OpenShift Logging / LOG-2731

CLO keeps reporting `Reconcile ServiceMonitor retry error` and `Reconcile Service retry error` after creating clusterlogging.


    • Type: Bug
    • Resolution: Done
    • Priority: Undefined
    • Affects Version: Logging 5.5.0
    • Fix Version: Logging 5.5.0
    • Component: Log Collection
    • Status: NEW
    • QE Status: VERIFIED

      Description of problem:

      The CLO keeps reporting the following errors after creating clusterlogging/instance:

      $ oc logs cluster-logging-operator-659d4b779d-8sfh5
      {"_ts":"2022-06-15T00:44:12.062766295Z","_level":"0","_component":"cluster-logging-operator","_message":"starting up...","go_arch":"amd64","go_os":"linux","go_version":"go1.17.10","operator_version":"5.5"}
      I0615 00:44:14.140726       1 request.go:665] Waited for 1.045192702s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta2?timeout=32s
      {"_ts":"2022-06-15T00:44:15.64759681Z","_level":"0","_component":"cluster-logging-operator","_message":"Registering Components."}
      {"_ts":"2022-06-15T00:44:15.647778226Z","_level":"0","_component":"cluster-logging-operator","_message":"Starting the Cmd."}
      {"_ts":"2022-06-15T00:52:00.34499973Z","_level":"0","_component":"k8sHandler","_message":"Reconcile Service retry error"}
      {"_ts":"2022-06-15T00:52:00.45313171Z","_level":"0","_component":"k8sHandler","_message":"Reconcile ServiceMonitor retry error"}
      {"_ts":"2022-06-15T00:52:12.886586413Z","_level":"0","_component":"k8sHandler","_message":"Reconcile Service retry error"}
      ......
      {"_ts":"2022-06-15T02:11:12.704373383Z","_level":"0","_component":"k8sHandler","_message":"Reconcile ServiceMonitor retry error"}
      {"_ts":"2022-06-15T02:11:54.846579124Z","_level":"0","_component":"k8sHandler","_message":"Reconcile Service retry error"}
      {"_ts":"2022-06-15T02:11:54.852776401Z","_level":"0","_component":"k8sHandler","_message":"Reconcile ServiceMonitor retry error"} 
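      Because the operator log is JSON-per-line with a `_component` field, the retry errors are easy to isolate and count. A minimal sketch (the sample lines are copied from the excerpt above; in practice, pipe `oc logs <clo-pod>` into the same filter):

```shell
# Count k8sHandler "retry error" entries in structured CLO log output.
# The sample lines below are taken from this report; normally you would
# pipe `oc logs cluster-logging-operator-659d4b779d-8sfh5` into the filter.
count=$(grep '"_component":"k8sHandler"' <<'EOF' | grep -c 'retry error'
{"_ts":"2022-06-15T00:52:00.34499973Z","_level":"0","_component":"k8sHandler","_message":"Reconcile Service retry error"}
{"_ts":"2022-06-15T00:52:00.45313171Z","_level":"0","_component":"k8sHandler","_message":"Reconcile ServiceMonitor retry error"}
{"_ts":"2022-06-15T00:44:15.64759681Z","_level":"0","_component":"cluster-logging-operator","_message":"Registering Components."}
EOF
)
echo "$count"   # only the two k8sHandler lines match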

       

      The services and servicemonitors:

      $ oc get svc
      NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
      cluster-logging-operator-metrics   ClusterIP   172.30.85.90     <none>        8686/TCP             98m
      collector                          ClusterIP   172.30.200.214   <none>        24231/TCP,2112/TCP   90m
      elasticsearch                      ClusterIP   172.30.211.91    <none>        9200/TCP             90m
      elasticsearch-cluster              ClusterIP   172.30.179.153   <none>        9300/TCP             90m
      elasticsearch-metrics              ClusterIP   172.30.42.255    <none>        60001/TCP            90m
      kibana                             ClusterIP   172.30.122.214   <none>        443/TCP              90m
      $ oc get servicemonitor
      NAME                                       AGE
      cluster-logging-operator-metrics-monitor   98m
      collector                                  90m
      monitor-elasticsearch-cluster              90m 

       

      Version-Release number of selected component (if applicable):

      cluster-logging.5.5.0 

      How reproducible:

      Always

      Steps to Reproduce:
      1. Subscribe to the CLO and EO
      2. Create clusterlogging/instance:

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
      spec:
        managementState: "Managed"
        logStore:
          type: "elasticsearch"
          retentionPolicy: 
            application:
              maxAge: 12h 
            infra:
              maxAge: 12h
            audit:
              maxAge: 1d
          elasticsearch:
            nodeCount: 3
            redundancyPolicy: "SingleRedundancy"
            resources:
              requests:
                memory: "2Gi"
            storage:
              storageClassName: "gp2"
              size: "20Gi"
        visualization:
          type: "kibana"
          kibana:
            resources: {}
            replicas: 1
        collection:
          logs:
            type: fluentd
            fluentd: {}

      3. Check the CLO logs
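      Step 3 can be scripted as a small check; a sketch assuming the operator log is fed on stdin (the function name is illustrative, not part of CLO):

```shell
# Report reconcile retry errors found in CLO log output read from stdin.
# Usage on a cluster (openshift-logging is the usual CLO namespace, an
# assumption here):
#   oc -n openshift-logging logs deploy/cluster-logging-operator | check_retry_errors
check_retry_errors() {
  grep -E 'Reconcile (Service|ServiceMonitor) retry error'
}

# Example against a line copied from this report:
result=$(echo '{"_component":"k8sHandler","_message":"Reconcile Service retry error"}' \
  | check_retry_errors)
echo "$result"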

      Actual results:

      The CLO repeatedly logs `Reconcile Service retry error` and `Reconcile ServiceMonitor retry error`, as shown in the description.

      Expected results:

      No errors

      Additional info:

        Assignee: Vitalii Parfonov (vparfono)
        Reporter: Qiaoling Tang (qitang@redhat.com)