Red Hat OpenShift Dev Spaces (formerly CodeReady Workspaces) / CRW-2549

Investigate memory consumption of che-operator with a growing number of namespaces on the cluster


    • Type: Task
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 2.14.0.GA
    • docs
      = Improved memory consumption of the {prod-short} Operator

      This enhancement improves memory consumption of the {prod-short} Operator in a Kubernetes cluster with many namespaces.

      Synced from Eclipse Che issue

      https://github.com/eclipse/che/issues/20647

      Is your task related to a problem? Please describe

      The che-operator pod is OOMKilled when there are a lot of namespaces on the cluster.
      This has been partially fixed by [1], but I can still observe that the operator consumes too much memory.
      We have to find the reason and fix it.

      [1] https://issues.redhat.com/browse/CRW-2383
      [2] https://github.com/eclipse-che/che-operator/pull/1146
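      A minimal reproduction sketch (in Go, using client-go and the metrics API) is shown below. It assumes metrics-server is available on the cluster and that the operator runs in the "eclipse-che" namespace with the label "app=che-operator"; both values are assumptions, not taken from this issue.

{code:go}
// Hedged reproduction sketch: the namespace "eclipse-che" and the label
// selector "app=che-operator" are assumptions; adjust them to your install.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	kube := kubernetes.NewForConfigOrDie(cfg)
	metrics := metricsclient.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create 500 empty namespaces to simulate a cluster with many namespaces.
	for i := 0; i < 500; i++ {
		ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("memtest-%03d", i)}}
		if _, err := kube.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// Sample the operator pod's memory usage as reported by metrics-server
	// (allow a minute or two for metrics to refresh after the creation burst).
	pods, err := metrics.MetricsV1beta1().PodMetricses("eclipse-che").
		List(ctx, metav1.ListOptions{LabelSelector: "app=che-operator"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Containers {
			fmt.Printf("%s/%s memory: %s\n", p.Name, c.Name, c.Usage.Memory().String())
		}
	}
}
{code}

      Comparing the reported value before and after creating the namespaces gives a rough signal of whether operator memory still scales with the namespace count.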

      Describe the solution you'd like

      N/A

      Describe alternatives you've considered

      No response

      Additional context

      https://github.com/devfile/devworkspace-operator/issues/616
      https://github.com/eclipse/che/issues/20529
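      For context on why memory can scale with namespace count: operators built on controller-runtime keep an informer cache of every watched object, so cluster-wide watches grow with the size of the cluster. A common mitigation is to narrow what the cache stores, for example by label selector. The sketch below shows that general pattern against a recent controller-runtime (v0.15+); the label value and object kinds are illustrative assumptions, and this is not the actual change made in [2].

{code:go}
// General cache-scoping pattern, not the actual che-operator fix: restrict the
// controller-runtime cache so that only labelled Secrets and ConfigMaps are
// stored, keeping memory roughly independent of the total namespace count.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	// Illustrative label selector; the real operator chooses its own watch scope.
	selector := labels.SelectorFromSet(labels.Set{"app.kubernetes.io/part-of": "che.eclipse.org"})

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Cache: cache.Options{
			ByObject: map[client.Object]cache.ByObject{
				&corev1.Secret{}:    {Label: selector},
				&corev1.ConfigMap{}: {Label: selector},
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// Controllers would be registered against mgr here before calling
	// mgr.Start(ctrl.SetupSignalHandler()).
	_ = mgr
}
{code}

      Scoping the cache this way trades memory for narrower visibility, so any such change has to keep watching everything the reconcilers actually read.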

      Release Notes Text

      Title: Improved operator memory consumption
      Content: Fixed an operator issue that forced users to increase the operator's memory limit as the number of namespaces on the cluster grew.

            Assignee: Unassigned
            Reporter: jiralint.codeready (Bot Codeready)
            Max Leonov
