  OpenShift Pipelines / SRVKP-827

Pipeline resource quota


    • Type: Epic
    • Resolution: Unresolved
    • Priority: Normal
    • Component/s: Tekton Pipelines
    • Epic Name: Pipeline resource quota
    • Status: To Do
    • Feature Link: SECFLOWOTL-128 - Pipeline level resource quota for Tekton
    • Progress: 50% To Do, 0% In Progress, 50% Done

      Goal
      As an app operator, I want to specify the maximum amount of cpu/memory that a particular pipeline can consume, so that the total resources requested by the concurrent pods of any PipelineRun for that pipeline do not exceed the quota set for that pipeline.

      Problem
      Tekton admins cannot define a resource quota per pipeline, which would give them granular control over how much cpu and memory a particular pipeline is allowed to consume on the cluster.

      Using a LimitRange does not help, since it cannot distinguish the pods running for the pipeline from the rest of the pods in the namespace. For example, a Maven build task in a pipeline requires far more resources than the pod for the resulting application. App operators need a way to give the Maven build enough quota for that application without granting too much quota to all pods in the namespace.
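
      A LimitRange is namespace-scoped and applies the same per-container defaults and maximums to every pod in the namespace, whether the pod belongs to a PipelineRun or to the deployed workload, so it cannot single out build pods. A minimal sketch (the name and values are illustrative):

      {code:yaml}
      apiVersion: v1
      kind: LimitRange
      metadata:
        name: per-container-limits    # illustrative name
      spec:
        limits:
          - type: Container
            defaultRequest:           # applied to containers that set no requests
              cpu: 100m
              memory: 128Mi
            default:                  # applied to containers that set no limits
              cpu: 250m
              memory: 256Mi
            max:                      # caps every container in the namespace, including build pods
              cpu: "1"
              memory: 1Gi
      {code}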

      Per-Task resource limits do not help either, since a Task's resource usage depends on what it is being used for. A Maven task building two different applications can consume vastly different amounts of resources, and the Task author cannot pre-determine that.

      As a workaround, customers currently define cpu and memory params on all of their Tasks, and then define matching cpu and memory params with default values on their Pipelines to achieve the scenario above.
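
      A rough sketch of that workaround, assuming param substitution is honoured in step resource requests/limits; the Task, Pipeline, image, and param names below are illustrative:

      {code:yaml}
      apiVersion: tekton.dev/v1beta1
      kind: Task
      metadata:
        name: maven-build                      # illustrative
      spec:
        params:
          - name: build-cpu
            default: "1"
          - name: build-memory
            default: "2Gi"
        steps:
          - name: mvn-package
            image: maven:3-openjdk-11
            command: ["mvn", "package"]
            resources:
              requests:
                cpu: $(params.build-cpu)       # per-application sizing flows in via params
                memory: $(params.build-memory)
              limits:
                cpu: $(params.build-cpu)
                memory: $(params.build-memory)
      ---
      apiVersion: tekton.dev/v1beta1
      kind: Pipeline
      metadata:
        name: app-pipeline                     # illustrative
      spec:
        params:
          - name: build-cpu                    # matching pipeline params with per-app defaults
            default: "2"
          - name: build-memory
            default: "4Gi"
        tasks:
          - name: build
            taskRef:
              name: maven-build
            params:
              - name: build-cpu
                value: $(params.build-cpu)
              - name: build-memory
                value: $(params.build-memory)
      {code}

      This moves the sizing decision onto the Pipeline rather than the Task, but it only tunes individual pods; it does not cap the aggregate cpu/memory of all concurrent pods of a PipelineRun, which is what this epic asks for.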

      Estimation

      M

              Assignee: Unassigned
              Reporter: Siamak Sadeghianfar (rh-ee-ssadeghi)
              Votes: 0
              Watchers: 6
