GITOPS-2404

Redis HA Pods Not Updating CPU and Memory Resources in GitOps Operator

      Release Note:

      Before this update, when the Operator was deployed and running with the high availability (HA) feature enabled, setting resource limits under the `.spec.ha.resources` field did not affect the Redis HA pods. This update fixes the reconciliation by adding checks in the Redis reconciliation code that verify whether the `.spec.ha.resources` field in the Argo CD custom resource (CR) has been updated. Now, when the Argo CD CR is updated with new CPU and memory request or limit values for HA, the changes are applied to the Redis HA pods.
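
      With the fix in place, the behavior can be checked with the same kind of commands used later in this issue; the StatefulSet name, namespace, and patch values below are assumed from a default openshift-gitops installation and are only an illustration:
      # Update the HA resource requests/limits in the Argo CD CR
      # oc patch argocd openshift-gitops -n openshift-gitops --type=json -p='[{"op": "replace", "path": "/spec/ha/resources", "value": {"limits": {"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "500m", "memory": "256Mi"}}}]'

      # Wait for the Redis HA StatefulSet to roll out, then confirm the new values on the pods
      # oc rollout status statefulset/openshift-gitops-redis-ha-server -n openshift-gitops
      # oc get pod openshift-gitops-redis-ha-server-0 -n openshift-gitops -o=jsonpath='{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'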

      Issue Description:

      When the GitOps Operator is deployed and running with high availability (HA) in an OpenShift 4.10 environment using OpenShift GitOps version openshift-gitops-operator.v1.6.2 or later, increasing the CPU and memory resources for the Argo CD HA resources (redis-ha-server-X) does not take effect. As a result, the pods are OOMKilled due to resource constraints.

      Procedure to Reproduce the Issue:

      • Deploy the GitOps Operator version 1.6.2 or higher. An Argo CD custom resource (CR) is created by default.
      • Enable HA for Argo CD and specify the desired CPU and memory resources in the ha section of the Argo CD CR, as shown below.
      # Current values
      
      # oc get argocd openshift-gitops -o=jsonpath='{.spec.ha}' | jq
      {
        "enabled": true,
        "resources": {
          "limits": {
            "cpu": "500m",
            "memory": "256Mi"
          },
          "requests": {
            "cpu": "250m",
            "memory": "128Mi"
          }
        }
      } 
      
      # oc get pod openshift-gitops-redis-ha-server-0 -o=jsonpath='{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'
      redis: {"limits":{"cpu":"500m","memory":"256Mi"},"requests":{"cpu":"250m","memory":"128Mi"}}
      sentinel: {"limits":{"cpu":"500m","memory":"256Mi"},"requests":{"cpu":"250m","memory":"128Mi"}}
      # Enable HA and update cpu/memory values
      # oc patch argocd openshift-gitops -n openshift-gitops --type=json -p='[{"op": "replace", "path": "/spec/ha", "value": {"enabled": true, "resources": {"limits": {"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "500m", "memory": "256Mi"}}}}]'
      

      Despite updating the CPU and memory resources in the Argo CD CR and restarting the pods, the Redis HA pods do not reflect the changes and continue to experience OOMKilled errors.
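
      The symptom can be confirmed with the commands below; the pod name is the default one from the reproduction steps above, and the jsonpath expressions only read standard pod spec and status fields:
      # Confirm the pod resources still show the old values
      # oc get pod openshift-gitops-redis-ha-server-0 -n openshift-gitops -o=jsonpath='{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'

      # Check why the containers were last terminated (expected to show OOMKilled)
      # oc get pod openshift-gitops-redis-ha-server-0 -n openshift-gitops -o=jsonpath='{range .status.containerStatuses[*]}{.name}: {.lastState.terminated.reason}{"\n"}{end}'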

      Expected Behavior:

      The expectation is that when the Argo CD CR is updated with new CPU and memory request/limit values for HA, those changes are applied to the Redis HA pods. This would allow the pods to run without encountering OOMKilled errors and ensure sufficient resources for their operation.

       

      Workaround:

      The following workaround has been applied to mitigate the issue:

      • Disable HA for Argo CD by setting enabled to false in the ha section of the Argo CD CR.
      # oc patch argocd openshift-gitops -n openshift-gitops --type=json -p='[{"op": "replace", "path": "/spec/ha", "value": {"enabled": false, "resources": {"limits": {"cpu": "1", "memory": "512Mi"}, "requests": {"cpu": "500m", "memory": "256Mi"}}}}]' 
      • Increase the CPU and memory requests and limits for the Redis pods in the redis section of the Argo CD CR.
      # oc patch argocd openshift-gitops -n openshift-gitops --type=json -p='[{"op": "replace", "path": "/spec/redis/resources/limits/cpu", "value": "2"}, {"op": "replace", "path": "/spec/redis/resources/limits/memory", "value": "2Gi"}, {"op": "replace", "path": "/spec/redis/resources/requests/cpu", "value": "1"}, {"op": "replace", "path": "/spec/redis/resources/requests/memory", "value": "2Gi"}]' 
      • Re-enable HA, specifying the same resources in the ha section of the Argo CD CR.
      # oc patch argocd openshift-gitops -n openshift-gitops --type=json -p='[{"op": "replace", "path": "/spec/ha", "value": {"enabled": true, "resources": {"limits": {"cpu": "2", "memory": "2Gi"}, "requests": {"cpu": "1", "memory": "2Gi"}}}}]'

      By following this workaround, the Redis HA pods should be able to utilize the increased CPU and memory resources, effectively resolving the OOMKilled errors.
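
      After re-enabling HA, the updated values can be verified with the same jsonpath used earlier; the pod name below assumes the default openshift-gitops installation:
      # Verify that the Redis HA pods now carry the increased requests and limits
      # oc get pod openshift-gitops-redis-ha-server-0 -n openshift-gitops -o=jsonpath='{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'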
