OpenShift Logging / LOG-2120

EO becomes CrashLoopBackOff when deploying ES with more than 3 nodes


Details

    • Type: Bug
    • Resolution: Done
    • Priority: Undefined
    • Affects Version/s: Logging 5.4.0
    • Fix Version/s: Logging 5.4.0
    • Component/s: Log Storage
    • Labels: None
    • False
    • False
    • NEW
    • VERIFIED
    • Sprint: Logging (LogExp) - Sprint 213

    Description

      When deploying ES with more than 3 nodes, the EO enters CrashLoopBackOff and the ES pods can't be deployed. ClusterLogging CR used:

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
      spec:
        managementState: "Managed"
        logStore:
          type: "elasticsearch"
          retentionPolicy: 
            application:
              maxAge: 6h
            infra:
              maxAge: 3h
            audit:
              maxAge: 1d
          elasticsearch:
            nodeCount: 4
            redundancyPolicy: "SingleRedundancy"
            resources:
              requests:
                memory: "2Gi"
            storage:
              storageClassName: "gp2"
              size: "20Gi"
        visualization:
          type: "kibana"
          kibana:
            resources: {}
            replicas: 1
        collection:
          logs:
            type: "fluentd"
            fluentd: {}
      $ oc get pod -n openshift-operators-redhat
      NAME                                      READY   STATUS             RESTARTS        AGE
      elasticsearch-operator-5fff5b7d9d-t4c48   1/2     CrashLoopBackOff   5 (2m15s ago)   7m57s
      
      EO log:
      {"_ts":"2022-01-10T08:20:35.831827261Z","_level":"0","_component":"elasticsearch-operator","_message":"Failed to update Elasticsearch status. Trying again...","cluster":"elasticsearch","error":{"ErrStatus":{"metadata":{},"status":"Failure","message":"Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: spec.indexManagement.policies.phases.delete.pruneNamespacesInterval: Invalid value: \"\": spec.indexManagement.policies.phases.delete.pruneNamespacesInterval in body should match '^([0-9]+)([yMwdhHms]{0,1})$'","reason":"Invalid","details":{"name":"elasticsearch","group":"logging.openshift.io","kind":"Elasticsearch","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"\": spec.indexManagement.policies.phases.delete.pruneNamespacesInterval in body should match '^([0-9]+)([yMwdhHms]{0,1})$'","field":"spec.indexManagement.policies.phases.delete.pruneNamespacesInterval"}]},"code":422}},"namespace":"openshift-logging"}
      {"_ts":"2022-01-10T08:20:35.832021207Z","_level":"0","_component":"elasticsearch-operator","_message":"Could not update CR for Elasticsearch","_error":{"msg":"Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: spec.indexManagement.policies.phases.delete.pruneNamespacesInterval: Invalid value: \"\": spec.indexManagement.policies.phases.delete.pruneNamespacesInterval in body should match '^([0-9]+)([yMwdhHms]{0,1})$'"},"cluster":"elasticsearch","namespace":"openshift-logging","retries":0}
      {"_ts":"2022-01-10T08:20:35.838778293Z","_level":"0","_component":"elasticsearch-operator","_message":"Failed to update Elasticsearch status. Trying again...","cluster":"elasticsearch","error":{"ErrStatus":{"metadata":{},"status":"Failure","message":"Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: spec.indexManagement.policies.phases.delete.pruneNamespacesInterval: Invalid value: \"\": spec.indexManagement.policies.phases.delete.pruneNamespacesInterval in body should match '^([0-9]+)([yMwdhHms]{0,1})$'","reason":"Invalid","details":{"name":"elasticsearch","group":"logging.openshift.io","kind":"Elasticsearch","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"\": spec.indexManagement.policies.phases.delete.pruneNamespacesInterval in body should match '^([0-9]+)([yMwdhHms]{0,1})$'","field":"spec.indexManagement.policies.phases.delete.pruneNamespacesInterval"}]},"code":422}},"namespace":"openshift-logging"}
      {"_ts":"2022-01-10T08:20:35.838833599Z","_level":"0","_component":"elasticsearch-operator","_message":"Could not update CR for Elasticsearch","_error":{"msg":"Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: spec.indexManagement.policies.phases.delete.pruneNamespacesInterval: Invalid value: \"\": spec.indexManagement.policies.phases.delete.pruneNamespacesInterval in body should match '^([0-9]+)([yMwdhHms]{0,1})$'"},"cluster":"elasticsearch","namespace":"openshift-logging","retries":0}
      panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1666462]
      
      
      goroutine 401 [running]:
      github.com/openshift/elasticsearch-operator/internal/elasticsearch.(*ElasticsearchRequest).populateNodes(0xc001d68640, 0x0, 0x0)
      	/go/src/github.com/openshift/elasticsearch-operator/internal/elasticsearch/cluster.go:312 +0x142
      github.com/openshift/elasticsearch-operator/internal/elasticsearch.(*ElasticsearchRequest).CreateOrUpdateElasticsearchCluster(0xc001d68640, 0x0, 0x0)
      	/go/src/github.com/openshift/elasticsearch-operator/internal/elasticsearch/cluster.go:58 +0x1ef
      github.com/openshift/elasticsearch-operator/internal/elasticsearch.Reconcile(0xc005ade000, 0x1c7d160, 0xc00007eb40, 0xc000799c20, 0x11)
      	/go/src/github.com/openshift/elasticsearch-operator/internal/elasticsearch/reconciler.go:184 +0x7b7
      github.com/openshift/elasticsearch-operator/controllers/logging.(*ElasticsearchReconciler).Reconcile(0xc0004aad50, 0x1c692d8, 0xc00263d380, 0xc000799c20, 0x11, 0xc000133770, 0xd, 0xc00263d380, 0xc000032000, 0x18b9680, ...)
      	/go/src/github.com/openshift/elasticsearch-operator/controllers/logging/elasticsearch_controller.go:85 +0x205
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00033d720, 0x1c69230, 0xc0006d6000, 0x1879020, 0xc000338040)
      	/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:298 +0x30d
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00033d720, 0x1c69230, 0xc0006d6000, 0xc00052ff00)
      	/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253 +0x205
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2(0xc0001337b0, 0xc00033d720, 0x1c69230, 0xc0006d6000)
      	/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:214 +0x6b
      created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
      	/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:210 +0x425
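      The first errors in the log are CRD validation failures: the operator writes an empty string into `spec.indexManagement.policies.phases.delete.pruneNamespacesInterval`, which the CRD rejects against the pattern `^([0-9]+)([yMwdhHms]{0,1})$` (digits plus an optional unit suffix). A minimal Go sketch of that pattern, using the regex copied verbatim from the error message (the helper name `validInterval` is illustrative, not operator code):

      ```go
      package main

      import (
      	"fmt"
      	"regexp"
      )

      // intervalPattern is the CRD validation regex for pruneNamespacesInterval,
      // taken verbatim from the error message above.
      var intervalPattern = regexp.MustCompile(`^([0-9]+)([yMwdhHms]{0,1})$`)

      func validInterval(v string) bool {
      	return intervalPattern.MatchString(v)
      }

      func main() {
      	// "" fails (at least one digit is required), which is exactly
      	// the Invalid value: "" rejection in the log.
      	for _, v := range []string{"", "15m", "7d", "24h"} {
      		fmt.Printf("%q valid: %v\n", v, validInterval(v))
      	}
      }
      ```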

      CSV: elasticsearch-operator.5.4.0-35
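      After the failed CR update, `populateNodes` (cluster.go:312) dereferences a nil value, turning a recoverable validation error into a SIGSEGV that crash-loops the operator. The following is a hypothetical, simplified Go sketch of that failure class and the kind of nil guard that prevents it; the types and function names are illustrative, not the operator's actual code:

      ```go
      package main

      import (
      	"errors"
      	"fmt"
      )

      // clusterState is a hypothetical stand-in for the operator's
      // Elasticsearch cluster state, not the real type.
      type clusterState struct {
      	Nodes []string
      }

      // fetchState simulates a status update that fails CRD validation:
      // it returns a nil state alongside the error.
      func fetchState(ok bool) (*clusterState, error) {
      	if !ok {
      		return nil, errors.New(`Invalid value: "": pruneNamespacesInterval`)
      	}
      	return &clusterState{Nodes: []string{"es-cdm-1"}}, nil
      }

      // populateNodes guards against a nil state instead of dereferencing it;
      // the absence of such a check is what produces the SIGSEGV above.
      func populateNodes(s *clusterState) ([]string, error) {
      	if s == nil {
      		return nil, errors.New("cluster state is nil; skipping node population")
      	}
      	return s.Nodes, nil
      }

      func main() {
      	s, err := fetchState(false)
      	if err != nil {
      		fmt.Println("update failed:", err)
      	}
      	// Without the guard, s.Nodes here would panic with the same
      	// nil-pointer dereference seen in the stack trace.
      	if _, err := populateNodes(s); err != nil {
      		fmt.Println("guarded:", err)
      	}
      }
      ```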

            People

              Assignee: Sashank Agarwal (sasagarw@redhat.com) (Inactive)
              Reporter: Qiaoling Tang (qitang@redhat.com)
