RHDEVDOCS-3841

Don't use ES limits/requests lower than default/recommended


    • Type: Bug
    • Resolution: Obsolete
    • Priority: Normal
    • OpenShift 4.7 Async
    • Logging

      [URL]
      https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-deploying.html#cluster-logging-deploy-console_cluster-logging-deploying
      [ISSUE]
This issue was fixed in the past in [1], and it has come back again.

It's a problem to set memory/CPU limits/requests lower than the recommended/default values in the examples, since it's very common to find customers with performance issues saying: "I just copied it from the docs." I'd therefore ask that the examples be modified to always use at least the default values, or higher.

For example, in the current docs it's possible to see:

      ~~~
      elasticsearch:
        nodeCount: 3
        storage:
          storageClassName: "<storage_class_name>"
          size: 200G
        resources:
          requests:
            memory: "8Gi"
      ~~~

And the default value is 16Gi, so the example should look like this:

      ~~~
      elasticsearch:
        nodeCount: 3
        storage:
          storageClassName: "<storage_class_name>"
          size: 200G
        resources:
          requests:
            memory: "16Gi"
      ~~~

The documentation explains that the default, if you don't set one, is 16Gi, but the examples shouldn't use values smaller than the default. When readers see such a value they assume it works, yet 8Gi is usually a bad value for an Elasticsearch node and introduces performance issues.
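      To make the guidance concrete, here is a minimal sketch of a corrected stanza that also covers the limits mentioned in the title, with both limits and requests set no lower than the defaults. The combined limits/requests layout and the CPU value are illustrative assumptions, not copied from the linked page:

      ~~~
      elasticsearch:
        nodeCount: 3
        storage:
          storageClassName: "<storage_class_name>"
          size: 200G
        resources:
          # Keep limits and requests at or above the documented 16Gi default
          limits:
            memory: "16Gi"
          requests:
            cpu: "1"        # assumed value, for illustration only
            memory: "16Gi"
      ~~~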

      [1]
      https://issues.redhat.com/browse/RHDEVDOCS-2625

              Assignee: Unassigned
              Reporter: Claire Bremble (cbremble@redhat.com)
              Votes: 0
              Watchers: 1
