OpenShift Logging / LOG-2752

Elasticsearch Operator is not working when deployed for a specific namespace (EO 5.4)


Details

    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: Logging 5.4.2
    • Component/s: Log Storage

    Description

      Issue: The Kibana pod does not scale up when the Elasticsearch Operator is deployed for a specific namespace.

      Errors in the Elasticsearch Operator 5.4 log:

      ~~~
      {"_ts":"2022-06-20T17:14:18.610236923Z","_level":"0","_component":"elasticsearch-operator_controller_kibana-controller","_message":"Reconciler error","_error":{"cause":{"cause":{"ErrStatus":{"metadata":{},"status":"Failure","message":"ImageStream.image.openshift.io \"oauth-proxy\" not found","reason":"NotFound","details":{"name":"oauth-proxy","group":"image.openshift.io","kind":"ImageStream"},"code":404}},"msg":"failed to get ImageStream","name":"oauth-proxy","namespace":"openshift"},"msg":"Failed to get oauth-proxy image"},"name":"kibana","namespace":"openshift-logging"}
      ~~~ 
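
      The error shows the kibana-controller failing to resolve the oauth-proxy ImageStream in the openshift namespace. A minimal diagnostic sketch (not part of the original report), assuming cluster-admin access; the service account name and install namespace in the second command are assumptions based on the default Elasticsearch Operator install:

      ~~~
      # Does the ImageStream the controller is asking for exist at all?
      $ oc get imagestream oauth-proxy -n openshift

      # If the operator was installed for a single namespace only, check whether
      # its service account can read ImageStreams in "openshift" at all.
      # (Service account name and namespace below are assumed defaults.)
      $ oc auth can-i get imagestreams -n openshift \
          --as=system:serviceaccount:openshift-operators-redhat:elasticsearch-operator
      ~~~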

      Pods with EO 5.4 deployed (note that no Kibana pod is created):

      ~~~

      $ oc get pods
      NAME                                           READY   STATUS      RESTARTS   AGE
      cluster-logging-operator-dc999b6b8-4wxqv       1/1     Running     0          20h
      collector-88tng                                2/2     Running     0          20h
      collector-h458f                                2/2     Running     0          20h
      collector-p99tf                                2/2     Running     0          20h
      collector-ss542                                2/2     Running     0          20h
      collector-v2cbp                                2/2     Running     0          20h
      collector-xdcm2                                2/2     Running     0          20h
      elasticsearch-cdm-q8417jsw-1-795f6866b-8c26k   2/2     Running     0          20h
      elasticsearch-im-app-27596940-vvzzz            0/1     Completed   0          3m39s
      elasticsearch-im-audit-27596940-44b4f          0/1     Completed   0          3m39s
      elasticsearch-im-infra-27596940-g8nq9          0/1     Completed   0          3m39s
      elasticsearch-operator-79df445b44-s6hxd        2/2     Running     0          20h

      ~~~
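
      A hedged way to confirm the missing piece is the Kibana deployment itself; the Kibana resource name and namespace below are taken from the error log above, and the commands are a sketch rather than part of the original report:

      ~~~
      # The Kibana CR exists, and its status should surface the same reconcile error.
      $ oc get kibana kibana -n openshift-logging -o yaml

      # No kibana deployment is present, which matches the pod listing above.
      $ oc get deployment -n openshift-logging
      ~~~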

      By contrast, when EO 5.3 is deployed in the same way, all pods (including Kibana) come up without any issue:

      ~~~

      $ oc get pods
      NAME                                            READY   STATUS    RESTARTS   AGE
      cluster-logging-operator-6cdcf7df5f-96v5t       1/1     Running   0          3m5s
      collector-78v24                                 2/2     Running   0          103s
      collector-88b99                                 2/2     Running   0          2m7s
      collector-fnpg2                                 2/2     Running   0          112s
      collector-gdgsk                                 2/2     Running   0          74s
      collector-hj9nj                                 2/2     Running   0          87s
      collector-wsqh7                                 2/2     Running   0          61s
      elasticsearch-cdm-jsmds3wv-1-79c7ccc69d-s8kkg   2/2     Running   0          2m47s
      elasticsearch-operator-78d5f6d868-nv4rm         2/2     Running   0          4m5s
      kibana-74b6cbbfd4-kh592                         2/2     Running   0          2m47s
      [quicklab@upi-0 ~]$ oc get csv
      NAME                           DISPLAY                            VERSION   REPLACES   PHASE
      cluster-logging.5.3.8          Red Hat OpenShift Logging          5.3.8                Succeeded
      elasticsearch-operator.5.3.8   OpenShift Elasticsearch Operator   5.3.8                Succeeded

      ~~~
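
      Since the reported difference is the operator version and the install scope, comparing the OperatorGroup backing each install may help. This is a hedged sketch; openshift-operators-redhat is the documented default namespace for the Elasticsearch Operator and may not match this cluster:

      ~~~
      # An OperatorGroup with spec.targetNamespaces restricts the operator to those
      # namespaces; the documented EO install uses an all-namespaces OperatorGroup.
      $ oc get operatorgroup -n openshift-operators-redhat -o yaml
      ~~~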

       

      Reproducible: every time.



          People

            rhn-support-aharchin Akhil Harchinder (Inactive)
            rhn-support-aharchin Akhil Harchinder (Inactive)
            Votes:
            0 Vote for this issue
            Watchers:
            2 Start watching this issue
