OpenShift Request For Enhancement
RFE-2906

changing/adding a custom configuration to the OpenShift KubeScheduler Operator or somewhere else directly on the scheduler itself



      1. Proposed title of this feature request

      • As the default spreading implementation for OpenShift 4.7 and later is based on PodTopologySpread, it would be very useful if a cluster administrator could override the built-in, code-based system defaults, which in JSON configuration look something like attachment:
         
        Is this supported by changing or adding a custom configuration to the OpenShift KubeScheduler Operator, or somewhere else directly on the scheduler itself? Could Red Hat support point out in detail how this would be possible?

      The documented use of "Scheduler Profiles":
      https://docs.openshift.com/container-platform/4.10/nodes/scheduling/nodes-scheduler-profiles.html
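
      For reference, upstream Kubernetes exposes these defaults through the PodTopologySpread plugin's defaultConstraints field in a KubeSchedulerConfiguration. The sketch below shows the upstream format only; the topologyKey and maxSkew values are illustrative, and whether the OpenShift KubeScheduler Operator accepts such a configuration is exactly the question this RFE raises:

      ```yaml
      # Upstream Kubernetes scheduler configuration (kube-scheduler --config).
      # Illustrative values; this is NOT a confirmed OpenShift-supported interface.
      apiVersion: kubescheduler.config.k8s.io/v1beta2
      kind: KubeSchedulerConfiguration
      profiles:
        - schedulerName: default-scheduler
          pluginConfig:
            - name: PodTopologySpread
              args:
                # Replace the code-based system defaults with an explicit list.
                defaultingType: List
                defaultConstraints:
                  - maxSkew: 1
                    topologyKey: topology.kubernetes.io/zone
                    whenUnsatisfiable: ScheduleAnyway
      ```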

      2. What is the nature and description of the request?
       - changing or adding a custom configuration to the OpenShift KubeScheduler Operator, or somewhere else directly on the scheduler itself

      3. Why does the customer need this? (List the business requirements here)
      CU - If it is not possible, at the cluster level, to configure the default scheduler with controls that understand the topology of the cluster with respect to failure zones and the criticality of workloads, I consider it a huge step back from what has always been present from OpenShift 3 up to OpenShift 4.9, where scheduling policies could be changed at the cluster level.

      Red Hat has, as of OpenShift 4.10, removed Kubernetes cluster-level scheduling customizability from the product and is not providing any means to bring it back. So I am not even sure this classifies as a feature request/enhancement, as this functionality has been present throughout the whole lifespan of the product. We do understand that the old scheduling method and configuration are being removed, but they should have been replaced with full access, through configuration, to the scheduling that Kubernetes provides today.

      The clusters that we run are multi-tenant clusters where projects/teams are supposed to have their workloads scheduled and run by OpenShift in a way that, through its configuration, understands and supports the layout/topology of the datacenter in which the OpenShift cluster is running.

      We have a contract with the projects/teams using the OpenShift environment that all they have to take care of is the "number of pods"; the resilience, spreading, and failure-zone support of the cluster then takes care of running the workloads on a best-effort basis.

      We want to keep that contract and would not like these many teams to have to know about the topology and spreading logic, so we cannot do without a meaningful cluster-level scheduling policy.

      We are migrating from 3.11 to 4.10 just now (very late, we understand), but we do not want to add to the burden of that migration by having all teams add scheduling configuration to their workloads.

      We are running nationally critical workloads in these environments, and having access to this scheduling configuration is crucial to ensure that major zone failures are handled by the cluster's built-in functionality and that scheduling configuration.

            gausingh@redhat.com Gaurav Singh
            rhn-support-bshaw Bikash Shaw
            Votes: 2
            Watchers: 4