OpenShift GitOps: GITOPS-7180

Redis HA Proxy pod fails to start with Security Context error


      Redis HA Proxy Pod Stuck on Upgrade
      Before this update, when upgrading the OpenShift GitOps Operator from v1.15.x to v1.16.x or later with High Availability (HA) enabled, the redis-ha-proxy pod would get stuck during the automatic rollout. This update fixes the issue by modifying the Operator's resource management logic to ensure the necessary permissions and security context fields are properly reconciled during the upgrade process. Now, the redis-ha-proxy pod rolls out successfully and automatically when the OpenShift GitOps Operator is upgraded, removing the need for manual intervention like deleting the Deployment.
    • GitOps Crimson Sprint 18, GitOps Crimson Sprint 21

      Description of Problem

       

      The `redis-ha-haproxy` pod fails to start after upgrading the GitOps Operator from v1.15.0 to v1.16.z. The following error is observed in the namespace events:

      2s          Warning   FailedCreate        replicaset/openshift-gitops-redis-ha-haproxy-xxxxx            Error creating: pods "openshift-gitops-redis-ha-haproxy-xxxxx-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.fsGroup: Invalid value: []int64{1000}: 1000 is not an allowed group, provider restricted-v2: .initContainers[0].runAsUser: Invalid value: 1000: must be in the ranges: [1000720000, 1000729999], provider restricted-v2: .containers[0].runAsUser: Invalid value: 1000: must be in the ranges: [1000720000, 1000729999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
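      The rejection occurs because the rendered haproxy Deployment pins UID/GID values (1000) that fall outside the UID range OpenShift assigns to the namespace (here 1000720000-1000729999), so the restricted-v2 SCC refuses to admit the pod. A minimal, hypothetical sketch of the offending fields versus an SCC-friendly form (field values are illustrative, not copied from the actual operator manifests):

      ```yaml
      # Offending form (rejected by restricted-v2): hard-coded IDs
      # outside the namespace's annotated UID range.
      securityContext:
        fsGroup: 1000          # 1000 is not an allowed group
      containers:
        - name: haproxy
          securityContext:
            runAsUser: 1000    # must be within the namespace UID range

      # SCC-friendly form: omit runAsUser/fsGroup so the restricted-v2
      # SCC injects values from the namespace's assigned range.
      securityContext: {}
      containers:
        - name: haproxy
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
      ```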
       

       

       

      Additional Info

       


       

      Reproducibility

      • Always

      Prerequisites/Environment

      •  

      Steps to Reproduce

      1. Enable HA in the ArgoCD CR
      2. Upgrade the GitOps Operator from v1.15.0 to v1.16.z
      3. Delete the old `redis-ha-haproxy` pod
      4. Check the namespace events for the error
      5. Delete the `redis-ha-haproxy` deployment to confirm the workaround resolves the issue
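      Step 1 above amounts to setting the HA flag on the ArgoCD custom resource. A minimal sketch, assuming the default instance and namespace names (the API version may differ by operator release):

      ```yaml
      apiVersion: argoproj.io/v1beta1
      kind: ArgoCD
      metadata:
        name: openshift-gitops        # assumed default instance name
        namespace: openshift-gitops   # assumed default namespace
      spec:
        ha:
          enabled: true               # deploys the redis-ha-haproxy topology
      ```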

      Expected Results

      • redis-ha-haproxy pod starts successfully after upgrade.

      Actual Results

      • redis-ha-haproxy pod fails to start.

      Problem Analysis

      • <Completed by engineering team as part of the triage/refinement process>

      Root Cause

      • <What is the root cause of the problem? Or, why is it not a bug?>

      Workaround (If Possible)

      Deleting the `redis-ha-haproxy` Deployment (for example, `oc delete deployment openshift-gitops-redis-ha-haproxy -n openshift-gitops`, adjusting the names for your instance) fixes the issue: the Operator recreates the Deployment and the pod starts successfully.

      Fix Approaches

      • <If we decide to fix this bug, how will we do it?>

      Acceptance Criteria

      • ...

      Definition of Done

      • Code Complete:
        • All code has been written, reviewed, and approved.
      • Tested:
        • Unit tests have been written and passed.
        • Ensure code coverage is not reduced with the changes.
        • Integration tests have been automated.
        • System tests have been conducted, and all critical bugs have been fixed.
        • Tested and merged on OpenShift either upstream or downstream on a local build.
      • Documentation:
        • User documentation or release notes have been written (if applicable).
      • Build:
        • Code has been successfully built and integrated into the main repository / project.
        • Midstream changes (if applicable) are done, reviewed, approved and merged.
      • Review:
        • Code has been peer-reviewed and meets coding standards.
        • All acceptance criteria defined in the user story have been met.
        • Tested by reviewer on OpenShift.
      • Deployment:
        • The feature has been deployed on OpenShift cluster for testing.

              rhn-support-alkumari Alka Kumari
              rhn-support-jyarora Jyotsana Arora