Red Hat OpenShift Dev Spaces (formerly CodeReady Workspaces)
CRW-6232

Add pod placement capabilities for devworkspace-webhook-server and make it more robust


    • Release Notes
      = Add pod placement capabilities for devworkspace-webhook-server and make it more robust

      With this release, the following devworkspace-webhook-server deployment options are available in the global DevWorkspaceOperatorConfig (DWOC): link:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#replicas[replicas], link:https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[pod tolerations], and link:https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector[nodeSelector].

      These configuration options exist in the global DWOC's `config.webhook` field:

      [source, yaml]
      ----
      apiVersion: controller.devfile.io/v1alpha1
      kind: DevWorkspaceOperatorConfig
      metadata:
        name: devworkspace-operator-config
        namespace: $OPERATOR_INSTALL_NAMESPACE
      config:
        routing:
          clusterHostSuffix: 192.168.49.2.nip.io
          defaultRoutingClass: basic
        webhook:
          nodeSelector: <string, string>
          tolerations: <[]tolerations>
          replicas: <int32>
      ----

      [NOTE]
      ====
      In order for the devworkspace-webhook-server configuration options to take effect:

      * You must place them in the link:https://github.com/devfile/devworkspace-operator?tab=readme-ov-file#global-configuration-for-the-devworkspace-operator[global DWOC], which has the name `devworkspace-operator-config` and exists in the namespace where the DevWorkspaceOperator is installed. If it does not already exist on the cluster, you must create it.

      * You must terminate the devworkspace-controller-manager pod and restart it so that the devworkspace-webhook-server deployment can be adjusted accordingly.

      Additionally, the default replica count for the devworkspace-webhook-server deployment has been increased to 2 to improve availability.
      ====
    • Enhancement
    • Done

      While pod placement for devspaces-operator and devworkspace-controller-manager can be managed via the Subscription object, those settings are not applied to devworkspace-webhook-server, which means pod placement for devworkspace-webhook-server cannot be controlled.

      Enterprise customers, however, want to run certain Operator and Controller pods on specific OpenShift Container Platform 4 - Node(s) based on labels and taints, but they are unable to control this for devworkspace-webhook-server, meaning the pod can be scheduled on virtually any OpenShift Container Platform 4 - Node that Kubernetes considers suitable.

      A way to control pod placement for devworkspace-webhook-server is therefore requested, giving enterprise customers more control to place the pod on the desired OpenShift Container Platform 4 - Node(s).

      The acceptance criteria are the following:

      • Possibility to specify nodeSelector for devworkspace-webhook-server
      • Possibility to specify toleration for devworkspace-webhook-server
      • Replica count for devworkspace-webhook-server should be set to 2

      Given that devworkspace-webhook-server is critical for the functionality of the entire OpenShift Container Platform 4 - Cluster (it is triggered when exec or rsh is run), the devworkspace-webhook-server Deployment should run with at least 2 replicas spread across different OpenShift Container Platform 4 - Nodes in order to prevent OpenShift Container Platform 4 - API disruption when one of the pods is restarting.

      Currently, disruption is observed when the pod is restarted or rescheduled, which is not acceptable in terms of the SLA that enterprise customers grant for the OpenShift Container Platform 4 - API.
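
      For illustration, the following is a minimal sketch of what such a configuration could look like in the global DWOC, using the `config.webhook` fields described above. The node label and matching taint used here (`infra: "true"`) are hypothetical placeholders, not a prescribed setup:

      [source, yaml]
      ----
      apiVersion: controller.devfile.io/v1alpha1
      kind: DevWorkspaceOperatorConfig
      metadata:
        name: devworkspace-operator-config
        namespace: $OPERATOR_INSTALL_NAMESPACE
      config:
        webhook:
          # Schedule the webhook server only on nodes carrying this (hypothetical) label.
          nodeSelector:
            infra: "true"
          # Tolerate the matching (hypothetical) taint so the pod can land on those nodes.
          tolerations:
            - key: infra
              operator: Equal
              value: "true"
              effect: NoSchedule
          # Run two replicas to avoid API disruption while one pod restarts.
          replicas: 2
      ----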

              aobuchow Andrew Obuchowicz
              rhn-support-sreber Simon Reber
              Oleksii Orel Oleksii Orel
              Jana Vrbkova Jana Vrbkova
