Bug
Resolution: Done
Normal
None
None
1
False
None
False
MCO Sprint 24, MCO Sprint 25
No
Description of problem:
While deploying many SNO clusters, it has been observed that the pods in the "open-cluster-management-addon-observability" namespace are being replaced frequently.
# oc --kubeconfig /root/hv-vm/kc/vm00030/kubeconfig get po -n open-cluster-management-addon-observability
NAME                                               READY   STATUS    RESTARTS   AGE
endpoint-observability-operator-758886b9cf-p6z9k   1/1     Running   0          17m
metrics-collector-deployment-8556767ff8-zglcw      1/1     Running   0          17m
It appears the pods are being replaced (not restarted) and are cycling roughly every 20 minutes. This is especially noticeable when compared with the other open-cluster-management pods:
# oc --kubeconfig /root/hv-vm/kc/vm00030/kubeconfig get po -A | grep open-cluster-management
open-cluster-management-addon-observability   endpoint-observability-operator-758886b9cf-p6z9k      1/1   Running   0               17m
open-cluster-management-addon-observability   metrics-collector-deployment-8556767ff8-zglcw         1/1   Running   0               17m
open-cluster-management-agent-addon           cluster-proxy-proxy-agent-65557f7dff-chqwf            3/3   Running   0               24h
open-cluster-management-agent-addon           config-policy-controller-d55f668c4-nwwqv              2/2   Running   0               24h
open-cluster-management-agent-addon           governance-policy-framework-c6d67bbff-dvnkl           2/2   Running   2 (3h18m ago)   24h
open-cluster-management-agent-addon           klusterlet-addon-search-57cb7d8f8b-5rgwd              1/1   Running   0               24h
open-cluster-management-agent-addon           klusterlet-addon-workmgr-7fc66d96c6-j4sbw             1/1   Running   0               24h
open-cluster-management-agent-addon           managed-serviceaccount-addon-agent-7fd5986d54-bdzdm   1/1   Running   0               24h
open-cluster-management-agent                 klusterlet-7467d8c57d-t47lg                           1/1   Running   0               24h
open-cluster-management-agent                 klusterlet-agent-77f4d8bb7d-kgm2p                     1/1   Running   0               24h
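Replacement (rather than restart) shows up in the listing itself: the observability pods keep RESTARTS at 0 while their AGE resets, whereas the neighbouring addon pods stay at 24h. A minimal sketch that filters such freshly recreated pods from captured `oc get po -A` output (sample lines inlined from the listing above; in practice pipe the live command through the same awk):

```shell
# Flag pods reporting 0 restarts but an AGE measured in minutes -- i.e. pods
# that were recently *replaced*, not restarted. Fields: NS NAME READY STATUS RESTARTS AGE.
oc_output='open-cluster-management-addon-observability endpoint-observability-operator-758886b9cf-p6z9k 1/1 Running 0 17m
open-cluster-management-addon-observability metrics-collector-deployment-8556767ff8-zglcw 1/1 Running 0 17m
open-cluster-management-agent klusterlet-7467d8c57d-t47lg 1/1 Running 0 24h'

printf '%s\n' "$oc_output" |
  awk '$5 == 0 && $6 ~ /^[0-9]+m$/ { print $2 }'
```

The klusterlet line is filtered out because its AGE is in hours; only the two observability pods are printed.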
Version-Release number of selected component (if applicable):
ACM - 2.11.0-DOWNSTREAM-2024-05-23-15-16-26
Hub OCP - 4.16.0-rc.3
Deployed SNOs - 4.14.16 (though this is also occurring in the large-scale environment with 4.15.16 and 4.16.0-rc.3)
How reproducible:
Steps to Reproduce:
- ...
Actual results:
Expected results:
Additional info:
The Deployments appear to be scaled up and down frequently, which seems unnecessary:
# oc --kubeconfig /root/hv-vm/kc/vm00030/kubeconfig describe deploy -n open-cluster-management-addon-observability endpoint-observability-operator
Name:                   endpoint-observability-operator
Namespace:              open-cluster-management-addon-observability
CreationTimestamp:      Tue, 04 Jun 2024 18:55:04 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 224
Selector:               name=endpoint-observability-operator
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           name=endpoint-observability-operator
  Annotations:      target.workload.openshift.io/management: {"effect":"PreferredDuringScheduling"}
  Service Account:  endpoint-observability-operator-sa
  Containers:
   endpoint-observability-operator:
    Image:      e38-h01-000-r650.rdu2.scalelab.redhat.com:5000/acm-d/endpoint-monitoring-rhel9-operator@sha256:06e3c7b67ff801873e951f47171e2ee2cba9603f0363e646e0c313a7f1354ee4
    Port:       8383/TCP
    Host Port:  0/TCP
    Command:
      endpoint-monitoring-operator
    Requests:
      cpu:     2m
      memory:  50Mi
    Environment:
      HUB_NAMESPACE:    vm00030
      WATCH_NAMESPACE:  (v1:metadata.namespace)
      POD_NAME:         (v1:metadata.name)
      SERVICE_ACCOUNT:  (v1:spec.serviceAccountName)
      OPERATOR_NAME:    endpoint-monitoring-operator
      HUB_KUBECONFIG:   /spoke/hub-kubeconfig/kubeconfig
      INSTALL_PROM:     false
      PULL_SECRET:      multiclusterhub-operator-pull-secret
    Mounts:
      /spoke/hub-kubeconfig from hub-kubeconfig-secret (ro)
  Volumes:
   hub-kubeconfig-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  observability-controller-hub-kubeconfig
    Optional:    false
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  endpoint-observability-operator-66f5bb4db6 (0/0 replicas created), endpoint-observability-operator-864b5db55f (0/0 replicas created)
NewReplicaSet:   endpoint-observability-operator-758886b9cf (1/1 replicas created)
Events:
  Type    Reason             Age                    From                   Message
  ----    ------             ----                   ----                   -------
  Normal  ScalingReplicaSet  150m (x30 over 5h30m)  deployment-controller  Scaled down replica set endpoint-observability-operator-758886b9cf to 0 from 1
  Normal  ScalingReplicaSet  141m (x33 over 5h30m)  deployment-controller  Scaled up replica set endpoint-observability-operator-864b5db55f to 1 from 0
  Normal  ScalingReplicaSet  115m (x7 over 135m)    deployment-controller  Scaled down replica set endpoint-observability-operator-864b5db55f to 0 from 1
  Normal  ScalingReplicaSet  71m (x16 over 135m)    deployment-controller  Scaled up replica set endpoint-observability-operator-758886b9cf to 1 from 0
  Normal  ScalingReplicaSet  64m (x17 over 135m)    deployment-controller  Scaled down replica set endpoint-observability-operator-758886b9cf to 0 from 1
  Normal  ScalingReplicaSet  58m (x18 over 135m)    deployment-controller  Scaled up replica set endpoint-observability-operator-864b5db55f to 1 from 0
  Normal  ScalingReplicaSet  10m (x6 over 50m)      deployment-controller  Scaled up replica set endpoint-observability-operator-864b5db55f to 1 from 0
  Normal  ScalingReplicaSet  10m (x6 over 50m)      deployment-controller  Scaled down replica set endpoint-observability-operator-758886b9cf to 0 from 1
  Normal  ScalingReplicaSet  10m (x6 over 50m)      deployment-controller  Scaled up replica set endpoint-observability-operator-758886b9cf to 1 from 0
  Normal  ScalingReplicaSet  10m (x6 over 50m)      deployment-controller  Scaled down replica set endpoint-observability-operator-864b5db55f to 0 from 1
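The flapping between the two ReplicaSets can be quantified by tallying scale-up/scale-down events per ReplicaSet. A sketch over captured event text (sample lines inlined from the events above; live data could come from something like `oc get events -n open-cluster-management-addon-observability --field-selector reason=ScalingReplicaSet`, which is worth verifying against your client version):

```shell
# Tally scale direction per ReplicaSet from "Scaled up/down replica set NAME ..." messages.
# Fields: $2 = up/down, $5 = ReplicaSet name.
events='Scaled down replica set endpoint-observability-operator-758886b9cf to 0 from 1
Scaled up replica set endpoint-observability-operator-864b5db55f to 1 from 0
Scaled down replica set endpoint-observability-operator-864b5db55f to 0 from 1
Scaled up replica set endpoint-observability-operator-758886b9cf to 1 from 0
Scaled down replica set endpoint-observability-operator-758886b9cf to 0 from 1'

printf '%s\n' "$events" |
  awk '/Scaled (up|down) replica set/ { count[$5 " " $2]++ }
       END { for (k in count) print count[k], k }' |
  sort
```

A healthy Deployment should show at most a handful of such events after rollout; dozens per hour, as seen here, confirms the churn.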
The other Deployment:
Name:                   metrics-collector-deployment
Namespace:              open-cluster-management-addon-observability
CreationTimestamp:      Tue, 04 Jun 2024 18:56:00 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 346
                        owner: observabilityaddon
Selector:               component=metrics-collector
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           cert/time-restarted=2024-6-5.1904
                    component=metrics-collector
  Annotations:      owner: observabilityaddon
                    target.workload.openshift.io/management: {"effect":"PreferredDuringScheduling"}
  Service Account:  endpoint-observability-operator-sa
  Containers:
   metrics-collector:
    Image:      e38-h01-000-r650.rdu2.scalelab.redhat.com:5000/acm-d/metrics-collector-rhel9@sha256:8e37489021df05d276d92b35d0fd329e62ca691797ebcf0415f60211de8bc036
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      /usr/bin/metrics-collector
      --listen=:8080
      --from=$(FROM)
      --from-query=$(FROM_QUERY)
      --to-upload=$(TO)
      --to-upload-ca=/tlscerts/ca/ca.crt
      --to-upload-cert=/tlscerts/certs/tls.crt
      --to-upload-key=/tlscerts/certs/tls.key
      --interval=300s
      --evaluate-interval=30s
      --limit-bytes=1073741824
      --label="cluster=vm00030"
      --label="clusterID=a36f30ba-21de-43cb-b454-139ad3f2f5cb"
      --from-token-file=/var/run/secrets/kubernetes.io/serviceaccount/token
      --from-ca-file=/etc/serving-certs-ca-bundle/service-ca.crt
      --label="clusterType=SNO"
      --collectrule={"name":"SNOHighCPUUsage","expr":"(1 - avg(rate(node_cpu_seconds_total{mode=\"idle\"}[5m]))) * 100 > 70","for":"2m","names":["container_cpu_cfs_periods_total","container_cpu_cfs_throttled_periods_total","kube_pod_container_resource_limits","kube_pod_container_resource_requests","namespace_workload_pod:kube_pod_owner:relabel","node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate","node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate"],"matches":[]}
      --collectrule={"name":"SNOHighMemoryUsage","expr":"(1 - sum(:node_memory_MemAvailable_bytes:sum) / sum(kube_node_status_allocatable{resource=\"memory\"})) * 100 > 70","for":"2m","names":["kube_pod_container_resource_limits","kube_pod_container_resource_requests","namespace_workload_pod:kube_pod_owner:relabel"],"matches":["__name__=\"container_memory_cache\",container!=\"\"","__name__=\"container_memory_rss\",container!=\"\"","__name__=\"container_memory_swap\",container!=\"\"","__name__=\"container_memory_working_set_bytes\",container!=\"\""]}
      --match={__name__=":node_memory_MemAvailable_bytes:sum"}
      --match={__name__="ALERTS"}
      --match={__name__="acm_managed_cluster_labels"}
      --match={__name__="authenticated_user_requests"}
      --match={__name__="authentication_attempts"}
      --match={__name__="cluster:capacity_cpu_cores:sum"}
      --match={__name__="cluster:capacity_memory_bytes:sum"}
      --match={__name__="cluster:container_cpu_usage:ratio"}
      --match={__name__="cluster:container_spec_cpu_shares:ratio"}
      --match={__name__="cluster:cpu_usage_cores:sum"}
      --match={__name__="cluster:memory_usage:ratio"}
      --match={__name__="cluster:memory_usage_bytes:sum"}
      --match={__name__="cluster:node_cpu:ratio"}
      --match={__name__="cluster:usage:resources:sum"}
      --match={__name__="cluster_infrastructure_provider"}
      --match={__name__="cluster_version"}
      --match={__name__="cluster_version_payload"}
      --match={__name__="container_spec_cpu_quota"}
      --match={__name__="coredns_dns_requests_total"}
      --match={__name__="coredns_dns_request_duration_seconds_sum"}
      --match={__name__="coredns_forward_responses_total"}
      --match={__name__="csv_succeeded"}
      --match={__name__="csv_abnormal"}
      --match={__name__="etcd_debugging_mvcc_db_total_size_in_bytes"}
      --match={__name__="etcd_mvcc_db_total_size_in_bytes"}
      --match={__name__="etcd_debugging_snap_save_total_duration_seconds_sum"}
      --match={__name__="etcd_disk_backend_commit_duration_seconds_bucket"}
      --match={__name__="etcd_disk_backend_commit_duration_seconds_sum"}
      --match={__name__="etcd_disk_wal_fsync_duration_seconds_bucket"}
      --match={__name__="etcd_disk_wal_fsync_duration_seconds_sum"}
      --match={__name__="etcd_object_counts"}
      --match={__name__="etcd_network_client_grpc_received_bytes_total"}
      --match={__name__="etcd_network_client_grpc_sent_bytes_total"}
      --match={__name__="etcd_network_peer_received_bytes_total"}
      --match={__name__="etcd_network_peer_sent_bytes_total"}
      --match={__name__="etcd_server_client_requests_total"}
      --match={__name__="etcd_server_has_leader"}
      --match={__name__="etcd_server_health_failures"}
      --match={__name__="etcd_server_leader_changes_seen_total"}
      --match={__name__="etcd_server_proposals_failed_total"}
      --match={__name__="etcd_server_proposals_pending"}
      --match={__name__="etcd_server_proposals_committed_total"}
      --match={__name__="etcd_server_proposals_applied_total"}
      --match={__name__="etcd_server_quota_backend_bytes"}
      --match={__name__="grpc_server_started_total"}
      --match={__name__="haproxy_backend_connection_errors_total"}
      --match={__name__="haproxy_backend_connections_total"}
      --match={__name__="haproxy_backend_current_queue"}
      --match={__name__="haproxy_backend_http_average_response_latency_milliseconds"}
      --match={__name__="haproxy_backend_max_sessions"}
      --match={__name__="haproxy_backend_response_errors_total"}
      --match={__name__="haproxy_backend_up"}
      --match={__name__="http_requests_total"}
      --match={__name__="instance:node_filesystem_usage:sum"}
      --match={__name__="instance:node_cpu_utilisation:rate1m"}
      --match={__name__="instance:node_load1_per_cpu:ratio"}
      --match={__name__="instance:node_memory_utilisation:ratio"}
      --match={__name__="instance:node_network_receive_bytes_excluding_lo:rate1m"}
      --match={__name__="instance:node_network_receive_drop_excluding_lo:rate1m"}
      --match={__name__="instance:node_network_transmit_bytes_excluding_lo:rate1m"}
      --match={__name__="instance:node_network_transmit_drop_excluding_lo:rate1m"}
      --match={__name__="instance:node_num_cpu:sum"}
      --match={__name__="instance:node_vmstat_pgmajfault:rate1m"}
      --match={__name__="instance_device:node_disk_io_time_seconds:rate1m"}
      --match={__name__="instance_device:node_disk_io_time_weighted_seconds:rate1m"}
      --match={__name__="kube_daemonset_status_desired_number_scheduled"}
      --match={__name__="kube_daemonset_status_number_unavailable"}
      --match={__name__="kube_node_spec_unschedulable"}
      --match={__name__="kube_node_status_allocatable"}
      --match={__name__="kube_node_status_allocatable_cpu_cores"}
      --match={__name__="kube_node_status_allocatable_memory_bytes"}
      --match={__name__="kube_node_status_capacity"}
      --match={__name__="kube_node_status_capacity_pods"}
      --match={__name__="kube_node_status_capacity_cpu_cores"}
      --match={__name__="kube_node_status_condition"}
      --match={__name__="kube_pod_container_resource_limits_cpu_cores"}
      --match={__name__="kube_pod_container_resource_limits_memory_bytes"}
      --match={__name__="kube_pod_container_resource_requests_cpu_cores"}
      --match={__name__="kube_pod_container_resource_requests_memory_bytes"}
      --match={__name__="kube_pod_info"}
      --match={__name__="kube_pod_owner"}
      --match={__name__="kube_resourcequota"}
      --match={__name__="kubelet_running_container_count"}
      --match={__name__="kubelet_runtime_operations"}
      --match={__name__="kubelet_runtime_operations_duration_seconds_sum"}
      --match={__name__="kubelet_volume_stats_available_bytes"}
      --match={__name__="kubelet_volume_stats_capacity_bytes"}
      --match={__name__="kube_persistentvolume_status_phase"}
      --match={__name__="machine_cpu_cores"}
      --match={__name__="machine_memory_bytes"}
      --match={__name__="mce_hs_addon_request_based_hcp_capacity_gauge"}
      --match={__name__="mce_hs_addon_low_qps_based_hcp_capacity_gauge"}
      --match={__name__="mce_hs_addon_medium_qps_based_hcp_capacity_gauge"}
      --match={__name__="mce_hs_addon_high_qps_based_hcp_capacity_gauge"}
      --match={__name__="mce_hs_addon_average_qps_based_hcp_capacity_gauge"}
      --match={__name__="mce_hs_addon_total_hosted_control_planes_gauge"}
      --match={__name__="mce_hs_addon_available_hosted_control_planes_gauge"}
      --match={__name__="mce_hs_addon_available_hosted_clusters_gauge"}
      --match={__name__="mce_hs_addon_deleted_hosted_clusters_gauge"}
      --match={__name__="mce_hs_addon_hypershift_operator_degraded_bool"}
      --match={__name__="mce_hs_addon_hosted_control_planes_status_gauge"}
      --match={__name__="mce_hs_addon_qps_based_hcp_capacity_gauge"}
      --match={__name__="mce_hs_addon_worker_node_resource_capacities_gauge"}
      --match={__name__="mce_hs_addon_qps_gauge"}
      --match={__name__="mce_hs_addon_request_based_hcp_capacity_current_gauge"}
      --match={__name__="mixin_pod_workload"}
      --match={__name__="namespace:kube_pod_container_resource_requests_cpu_cores:sum"}
      --match={__name__="namespace_memory:kube_pod_container_resource_requests:sum"}
      --match={__name__="namespace:container_memory_usage_bytes:sum"}
      --match={__name__="namespace_cpu:kube_pod_container_resource_requests:sum"}
      --match={__name__="node_cpu_seconds_total"}
      --match={__name__="node_filesystem_avail_bytes"}
      --match={__name__="node_filesystem_free_bytes"}
      --match={__name__="node_filesystem_size_bytes"}
      --match={__name__="node_memory_MemAvailable_bytes"}
      --match={__name__="node_netstat_Tcp_OutSegs"}
      --match={__name__="node_netstat_Tcp_RetransSegs"}
      --match={__name__="node_netstat_TcpExt_TCPSynRetrans"}
      --match={__name__="policyreport_info"}
      --match={__name__="up"}
      --match={__name__="prometheus_operator_reconcile_errors_total"}
      --match={__name__="prometheus_operator_reconcile_operations_total"}
      --match={__name__="cluster_operator_conditions"}
      --match={__name__="cluster_operator_up"}
      --match={__name__="cluster:policy_governance_info:propagated_count"}
      --match={__name__="cluster:policy_governance_info:propagated_noncompliant_count"}
      --match={__name__="policy:policy_governance_info:propagated_count"}
      --match={__name__="policy:policy_governance_info:propagated_noncompliant_count"}
      --match={__name__="cnv:vmi_status_running:count"}
      --match={__name__="kubevirt_hyperconverged_operator_health_status"}
      --match={__name__="workqueue_queue_duration_seconds_bucket",job="apiserver"}
      --match={__name__="workqueue_adds_total",job="apiserver"}
      --match={__name__="workqueue_depth",job="apiserver"}
      --match={__name__="go_goroutines",job="apiserver"}
      --match={__name__="process_cpu_seconds_total",job="apiserver"}
      --match={__name__="process_resident_memory_bytes",job="apiserver"}
      --rename="etcd_mvcc_db_total_size_in_bytes=etcd_debugging_mvcc_db_total_size_in_bytes"
      --rename="mixin_pod_workload=namespace_workload_pod:kube_pod_owner:relabel"
      --rename="namespace:kube_pod_container_resource_requests_cpu_cores:sum=namespace_cpu:kube_pod_container_resource_requests:sum"
      --rename="node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate=node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate"
      --recordingrule={"name":"apiserver_request_duration_seconds:histogram_quantile_99","query":"histogram_quantile(0.99,sum(rate(apiserver_request_duration_seconds_bucket{job=\"apiserver\", verb!=\"WATCH\"}[5m])) by (le))"}
      --recordingrule={"name":"apiserver_request_duration_seconds:histogram_quantile_99:instance","query":"histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{job=\"apiserver\", verb!=\"WATCH\"}[5m])) by (le, verb, instance))"}
      --recordingrule={"name":"sum:apiserver_request_total:1h","query":"sum(rate(apiserver_request_total{job=\"apiserver\"}[1h])) by(code, instance)"}
      --recordingrule={"name":"sum:apiserver_request_total:5m","query":"sum(rate(apiserver_request_total{job=\"apiserver\"}[5m])) by(code, instance)"}
      --recordingrule={"name":"rpc_rate:grpc_server_handled_total:sum_rate","query":"sum(rate(grpc_server_handled_total{job=\"etcd\",grpc_type=\"unary\",grpc_code!=\"OK\"}[5m]))"}
      --recordingrule={"name":"active_streams_watch:grpc_server_handled_total:sum","query":"sum(grpc_server_started_total{job=\"etcd\",grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total{job=\"etcd\",grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"})"}
      --recordingrule={"name":"active_streams_lease:grpc_server_handled_total:sum","query":"sum(grpc_server_started_total{job=\"etcd\",grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total{job=\"etcd\",grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"})"}
      --recordingrule={"name":"cluster:kube_pod_container_resource_requests:cpu:sum","query":"sum(sum(sum(kube_pod_container_resource_requests{resource=\"cpu\"}) by (pod,namespace,container) * on(pod,namespace) group_left(phase) max(kube_pod_status_phase{phase=~\"Running|Pending|Unknown\"} >0) by (pod,namespace,phase)) by (pod,namespace,phase))"}
      --recordingrule={"name":"cluster:kube_pod_container_resource_requests:memory:sum","query":"sum(sum(sum(kube_pod_container_resource_requests{resource=\"memory\"}) by (pod,namespace,container) * on(pod,namespace) group_left(phase) max(kube_pod_status_phase{phase=~\"Running|Pending|Unknown\"} >0) by (pod,namespace,phase)) by (pod,namespace,phase))"}
      --recordingrule={"name":"sli:apiserver_request_duration_seconds:trend:1m","query":"sum(increase(apiserver_request_duration_seconds_bucket{job=\"apiserver\",service=\"kubernetes\",le=\"1\",verb=~\"POST|PUT|DELETE|PATCH\"}[1m])) / sum(increase(apiserver_request_duration_seconds_count{job=\"apiserver\",service=\"kubernetes\",verb=~\"POST|PUT|DELETE|PATCH\"}[1m]))"}
      --recordingrule={"name":"container_memory_rss:sum","query":"sum(container_memory_rss) by (container, namespace)"}
      --recordingrule={"name":"kube_pod_container_resource_limits:sum","query":"sum(kube_pod_container_resource_limits) by (resource, namespace)"}
      --recordingrule={"name":"kube_pod_container_resource_requests:sum","query":"sum(kube_pod_container_resource_requests{container!=\"\"}) by (resource, namespace)"}
      --recordingrule={"name":"namespace_workload_pod:kube_pod_owner:relabel:avg","query":"count(avg(namespace_workload_pod:kube_pod_owner:relabel{pod!=\"\"}) by (workload, namespace)) by (namespace)"}
      --recordingrule={"name":"node_namespace_pod_container:container_cpu_usage_seconds_total:sum","query":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{container!=\"\"}) by (namespace) or sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate{container!=\"\"}) by (namespace)"}
    Requests:
      cpu:     10m
      memory:  100Mi
    Environment:
      FROM:        https://prometheus-k8s.openshift-monitoring.svc:9091
      FROM_QUERY:  https://prometheus-k8s.openshift-monitoring.svc:9091
      TO:          https://observatorium-api-open-cluster-management-observability.apps.acm-lta.rdu2.scalelab.redhat.com/api/metrics/v1/default/api/v1/receive
    Mounts:
      /etc/serving-certs-ca-bundle from serving-certs-ca-bundle (rw)
      /tlscerts/ca from mtlsca (rw)
      /tlscerts/certs from mtlscerts (rw)
  Volumes:
   mtlscerts:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  observability-controller-open-cluster-management.io-observability-signer-client-cert
    Optional:    false
   mtlsca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  observability-managed-cluster-certs
    Optional:    false
   secret-kube-rbac-proxy-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-collector-kube-rbac-tls
    Optional:    false
   secret-kube-rbac-proxy-metric:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-collector-kube-rbac-proxy-metric
    Optional:    false
   metrics-client-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      metrics-collector-clientca-metric
    Optional:  false
   serving-certs-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      metrics-collector-serving-certs-ca-bundle
    Optional:  false
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  metrics-collector-deployment-5975cf4dd5 (0/0 replicas created), metrics-collector-deployment-9b7cf969f (0/0 replicas created), metrics-collector-deployment-676d87bd4f (0/0 replicas created), metrics-collector-deployment-845f49d8f7 (0/0 replicas created), metrics-collector-deployment-74476d9897 (0/0 replicas created), metrics-collector-deployment-b75df99cf (0/0 replicas created), metrics-collector-deployment-8556767ff8 (0/0 replicas created), metrics-collector-deployment-659499dd4f (0/0 replicas created), metrics-collector-deployment-d76fff759 (0/0 replicas created), metrics-collector-deployment-cc84469cb (0/0 replicas created)
NewReplicaSet:   metrics-collector-deployment-68c55d6849 (1/1 replicas created)
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  172m                  deployment-controller  Scaled up replica set metrics-collector-deployment-59dd6689f9 to 1
  Normal  ScalingReplicaSet  172m (x15 over 5h2m)  deployment-controller  Scaled up replica set metrics-collector-deployment-5975cf4dd5 to 1 from 0
  Normal  ScalingReplicaSet  172m                  deployment-controller  Scaled down replica set metrics-collector-deployment-59dd6689f9 to 0 from 1
  Normal  ScalingReplicaSet  168m                  deployment-controller  Scaled up replica set metrics-collector-deployment-7bc96c45f5 to 1
  Normal  ScalingReplicaSet  142m (x67 over 4h52m) deployment-controller  (combined from similar events): Scaled up replica set metrics-collector-deployment-8b9849768 to 1
  Normal  ScalingReplicaSet  137m                  deployment-controller  Scaled up replica set metrics-collector-deployment-755c8988b5 to 1
  Normal  ScalingReplicaSet  137m                  deployment-controller  Scaled up replica set metrics-collector-deployment-755c8988b5 to 1 from 0
  Normal  ScalingReplicaSet  137m (x2 over 137m)   deployment-controller  Scaled down replica set metrics-collector-deployment-755c8988b5 to 0 from 1
  Normal  ScalingReplicaSet  135m                  deployment-controller  Scaled up replica set metrics-collector-deployment-59f88b7488 to 1
  Normal  ScalingReplicaSet  135m                  deployment-controller  Scaled up replica set metrics-collector-deployment-59f88b7488 to 1 from 0
  Normal  ScalingReplicaSet  132m                  deployment-controller  Scaled up replica set metrics-collector-deployment-6fc8b98f4 to 1
  Normal  ScalingReplicaSet  132m (x2 over 132m)   deployment-controller  Scaled down replica set metrics-collector-deployment-6fc8b98f4 to 0 from 1
  Normal  ScalingReplicaSet  132m (x2 over 135m)   deployment-controller  Scaled down replica set metrics-collector-deployment-59f88b7488 to 0 from 1
  Normal  ScalingReplicaSet  122m                  deployment-controller  Scaled up replica set metrics-collector-deployment-778d66db75 to 1
  Normal  ScalingReplicaSet  122m                  deployment-controller  Scaled down replica set metrics-collector-deployment-778d66db75 to 0 from 1
  Normal  ScalingReplicaSet  122m                  deployment-controller  Scaled up replica set metrics-collector-deployment-778d66db75 to 1 from 0
  Normal  ScalingReplicaSet  121m (x2 over 121m)   deployment-controller  Scaled down replica set metrics-collector-deployment-67bf7d66b6 to 0 from 1
  Normal  ScalingReplicaSet  121m (x2 over 122m)   deployment-controller  Scaled up replica set metrics-collector-deployment-67bf7d66b6 to 1 from 0
  Normal  ScalingReplicaSet  92m (x27 over 132m)   deployment-controller  (combined from similar events): Scaled down replica set metrics-collector-deployment-7cb9666698 to 0 from 1
  Normal  ScalingReplicaSet  72m (x11 over 132m)   deployment-controller  Scaled up replica set metrics-collector-deployment-5975cf4dd5 to 1 from 0
  Normal  ScalingReplicaSet  72m (x12 over 135m)   deployment-controller  Scaled down replica set metrics-collector-deployment-5975cf4dd5 to 0 from 1
  Normal  ScalingReplicaSet  72m                   deployment-controller  Scaled up replica set metrics-collector-deployment-9b7cf969f to 1 from 0
  Normal  ScalingReplicaSet  66m                   deployment-controller  Scaled up replica set metrics-collector-deployment-676d87bd4f to 1
  Normal  ScalingReplicaSet  66m                   deployment-controller  Scaled down replica set metrics-collector-deployment-676d87bd4f to 0 from 1
  Normal  ScalingReplicaSet  59m                   deployment-controller  Scaled up replica set metrics-collector-deployment-845f49d8f7 to 1
  Normal  ScalingReplicaSet  52m                   deployment-controller  Scaled up replica set metrics-collector-deployment-74476d9897 to 1
  Normal  ScalingReplicaSet  52m (x2 over 52m)     deployment-controller  Scaled down replica set metrics-collector-deployment-74476d9897 to 0 from 1
  Normal  ScalingReplicaSet  52m                   deployment-controller  Scaled up replica set metrics-collector-deployment-74476d9897 to 1 from 0
  Normal  ScalingReplicaSet  51m                   deployment-controller  Scaled up replica set metrics-collector-deployment-b75df99cf to 1
  Normal  ScalingReplicaSet  51m                   deployment-controller  Scaled down replica set metrics-collector-deployment-b75df99cf to 0 from 1
  Normal  ScalingReplicaSet  35m                   deployment-controller  Scaled up replica set metrics-collector-deployment-8556767ff8 to 1
  Normal  ScalingReplicaSet  15m                   deployment-controller  Scaled up replica set metrics-collector-deployment-659499dd4f to 1 from 0
  Normal  ScalingReplicaSet  15m (x2 over 15m)     deployment-controller  Scaled up replica set metrics-collector-deployment-5975cf4dd5 to 1 from 0
  Normal  ScalingReplicaSet  15m                   deployment-controller  Scaled down replica set metrics-collector-deployment-659499dd4f to 0 from 1
  Normal  ScalingReplicaSet  15m                   deployment-controller  Scaled down replica set metrics-collector-deployment-8556767ff8 to 0 from 1
  Normal  ScalingReplicaSet  13m                   deployment-controller  Scaled up replica set metrics-collector-deployment-d76fff759 to 1
  Normal  ScalingReplicaSet  13m                   deployment-controller  Scaled down replica set metrics-collector-deployment-d76fff759 to 0 from 1
  Normal  ScalingReplicaSet  13m                   deployment-controller  Scaled up replica set metrics-collector-deployment-cc84469cb to 1
  Normal  ScalingReplicaSet  13m                   deployment-controller  Scaled down replica set metrics-collector-deployment-cc84469cb to 0 from 1
  Normal  ScalingReplicaSet  12m (x3 over 12m)     deployment-controller  (combined from similar events): Scaled up replica set metrics-collector-deployment-68c55d6849 to 1 from 0
  Normal  ScalingReplicaSet  11m (x3 over 35m)     deployment-controller  Scaled down replica set metrics-collector-deployment-5975cf4dd5 to 0 from 1
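The dedup counters in the events already imply the cadence: for example, the earlier endpoint-observability-operator event annotated "(x30 over 5h30m)" works out to one scale-down roughly every 11 minutes. A quick sketch of that arithmetic (assumes the "xN over NhNNm" counter format kubectl prints for deduplicated events; durations without an hours component would need an extra case):

```shell
# Average minutes between occurrences for an event noted as "(x30 over 5h30m)".
note='x30 over 5h30m'
printf '%s\n' "$note" |
  awk '{ n = substr($1, 2)              # occurrence count, e.g. 30
         match($3, /^[0-9]+h/)          # hours part of the duration, e.g. "5h"
         h = substr($3, 1, RLENGTH - 1)
         m = substr($3, RLENGTH + 1)
         sub(/m$/, "", m)               # minutes part, e.g. "30"
         printf "%.0f\n", (h * 60 + m) / n }'
```

For x30 over 330 minutes this prints 11, consistent with a pod replacement cycle of roughly 20 minutes when the paired scale-up and scale-down events are taken together.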