OpenShift Pod Autoscaling / PODAUTO-190

Allow cluster admins to scale and control which nodes Cluster Resource Override runs on


    • BU Product Work
    • 3
    • OCPSTRAT-1427 - Ability to run CRO on infra/worker node (this will also enable CRO to run in HCP)
    • PODAUTO - Sprint 256, PODAUTO - Sprint 257

      As a Hypershift cluster admin, I would like to run Cluster Resource Override on my designated infra nodes. Additionally, in my traditional clusters, I would like to be able to run it on the nodes of my choice. Finally, I want to be able to control the number of replicas in its deployment.

      Engineering details:

      For CMA and VPA, we provide fields in the corresponding controller custom resource definition so that, when the controller CR is created, the cluster admin can specify a node selector and tolerations for node taints. Here is the PR showing how it was done for VPA: https://github.com/openshift/vertical-pod-autoscaler-operator/pull/162
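
      As a rough sketch of what the equivalent API could look like for CRO (type and field names below are assumptions modeled on the VPA change linked above, not a final design):

          // Hypothetical API additions to the ClusterResourceOverride CRD,
          // modeled on the VPA operator approach. All names are illustrative.
          package v1

          import corev1 "k8s.io/api/core/v1"

          // DeploymentOverrides carries scheduling hints that the operator would
          // copy into the operand (webhook) pod template.
          type DeploymentOverrides struct {
              // NodeSelector targets specific nodes, for example
              // {"node-role.kubernetes.io/infra": ""} to land on infra nodes.
              NodeSelector map[string]string `json:"nodeSelector,omitempty"`

              // Tolerations let the operand pods tolerate the taints that are
              // typically applied to infra nodes.
              Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
          }

          // The existing CR spec would grow a field for these overrides.
          type ClusterResourceOverrideSpec struct {
              DeploymentOverrides DeploymentOverrides `json:"deploymentOverrides,omitempty"`
          }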

      For scaling the deployment, we can either add another field to the CR that directly controls the number of replicas, or update the operator to ignore changes to the replica count so that cluster admins can scale the deployment directly, which is otherwise fully managed by the operator.
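
      A rough sketch of the two options, assuming the operator builds the operand Deployment in a reconcile loop (function and field names are illustrative only):

          package reconcile

          import appsv1 "k8s.io/api/apps/v1"

          // Option 1: the CR carries an explicit replica count (an assumed
          // spec field) that the operator always enforces on the Deployment.
          func enforceReplicas(desired *appsv1.Deployment, crReplicas *int32) {
              if crReplicas != nil {
                  desired.Spec.Replicas = crReplicas
              }
          }

          // Option 2: the operator stops reconciling spec.replicas, so an admin
          // can `oc scale` the operand Deployment directly without the operator
          // reverting the change on the next sync.
          func preserveReplicas(desired, existing *appsv1.Deployment) {
              // Keep whatever replica count is currently live on the cluster
              // instead of overwriting it with the operator's default.
              desired.Spec.Replicas = existing.Spec.Replicas
          }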

      Acceptance criteria:

      • A new card has been created for docs with instructions and examples of how to use the new feature
      • A new card has been created for QE with suggestions for one or more relevant test cases
      • The CRO can be run on worker nodes
      • The CRO can be scaled to an arbitrary number of replicas

              rh-ee-macao Max Cao
              joelsmith.redhat Joel Smith