OpenShift GitOps / GITOPS-5665

High memory utilization of `openshift-gitops-operator-controller-manager` pod Post upgrade to Gitops operator v1.14.0


      Before this update, resource limits were set on the operator container as a recommended practice. However, these limits caused functional issues on clusters with a large number of secrets and configmaps due to the operator requiring more memory than allowed by these limits. This update removes the resource limits for the manager container to minimize the impact on functionality. Ongoing efforts to optimize memory consumption are planned for future releases.
    • GitOps Crimson - Sprint 3264

      Description of Problem:
       
- High memory utilization of the `openshift-gitops-operator-controller-manager` pod has been observed after upgrading to operator v1.14.0.

      Additional Info:

• The "manager" container in the openshift-gitops-operator-controller-manager pod goes into a CrashLoopBackOff state and has a high restart count.
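The restart count and the OOMKilled termination reason can be confirmed with standard `oc` commands; the namespace below is an assumption based on the default operator install location and may differ on your cluster:

```shell
# List the operator pods and their restart counts
# (namespace assumed to be the default install location)
oc get pods -n openshift-gitops-operator

# Show the full container state, including the OOMKilled
# last-state reason and exit code 137
oc describe pod -n openshift-gitops-operator \
  "$(oc get pods -n openshift-gitops-operator -o name \
      | grep controller-manager)"
```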

       
      Prerequisites/Environment:

openshift-gitops-operator v1.14.0

The following is an example of the status of the manager container:

      Containers:
        manager:
          Container ID:  cri-o:// 
          Image:         registry.redhat.io/openshift-gitops-1/gitops-rhel8-operator@sha256:
          Image ID:      registry.redhat.io/openshift-gitops-1/gitops-rhel8-operator@sha256: 
          Port:          9443/TCP
          Host Port:     0/TCP
          Command:
            /usr/local/bin/manager
          Args:
            --health-probe-bind-address=:8081
            --metrics-bind-address=127.0.0.1:8080
            --leader-elect
          State:          Waiting
            Reason:       CrashLoopBackOff
          Last State:     Terminated
            Reason:       OOMKilled
            Exit Code:    137

       
      Expected Results: 

The containers in the "openshift-gitops-operator-controller-manager" pod should run successfully without the user having to modify the memory limit.

      Actual Results:

The manager container in the "openshift-gitops-operator-controller-manager" pod goes into a CrashLoopBackOff state.
       
      Workaround (If Possible)
       
Currently, increasing the memory limit in the CSV brings the "openshift-gitops-operator-controller-manager" pod and its containers back into a healthy state.
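A sketch of that workaround follows. The CSV name placeholder, namespace, and the 2Gi value are illustrative assumptions, not values from this report; the exact CSV name varies with the installed operator version, and note that direct CSV edits may be reverted by OLM on the next upgrade:

```shell
# Find the installed GitOps operator CSV name (varies by version)
oc get csv -n openshift-gitops-operator

# Raise the memory limit on the manager container in the CSV.
# The deployment/container indexes assume the default CSV layout;
# verify with `oc get csv <csv-name> -o yaml` before patching.
oc patch csv <gitops-operator-csv-name> -n openshift-gitops-operator \
  --type=json \
  -p '[{"op": "replace",
        "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
        "value": "2Gi"}]'
```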

rh-ee-sghadi Siddhesh Ghadi
rhn-support-dtambat Darshan Tambat