Observability and Data Analysis Program
OBSDA-112

Tune maxUnavailable of 'collector' daemonset for reducing upgrade time


    • 0% To Do, 0% In Progress, 100% Done

      > 1. What is the nature and description of the request? (What is it affecting as of now? What use case/results will be obtained if the request is implemented?)

      The maxUnavailable value of the daemonsets of OpenShift core components was modified in order to reduce upgrade time, but OpenShift Logging's 'collector' daemonset still uses a maxUnavailable of 1.

      There is no reason to leave only this one unchanged.
      Please update the maxUnavailable of the 'collector' daemonset to match the other daemonsets.
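      For illustration, the requested change amounts to the following fragment of the daemonset's update strategy (field names per the Kubernetes DaemonSet API; the 10% value is an assumption mirroring what other OpenShift core daemonsets were changed to):

      ```yaml
      # Hypothetical 'collector' DaemonSet update strategy after the change.
      # 10% is illustrative; any percentage >= 10% would follow the request.
      spec:
        updateStrategy:
          type: RollingUpdate
          rollingUpdate:
            maxUnavailable: 10%
      ```

      In a running cluster, a change like this could presumably be applied by the operator reconciling the daemonset spec rather than by patching the daemonset directly, since the operator would revert a manual patch.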

      > 2. Why does the customer need this? (List the business requirements here)

      Reducing upgrade time is important in order to ease the customer's upgrade burden.

      If maxUnavailable is 1, the time for a rolling update of the daemonset grows in proportion to the number of nodes.
      This makes customers hesitant to add more nodes to their clusters, which means Red Hat loses opportunities to sell more OpenShift subscriptions.
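      The scaling argument above can be sketched numerically. The helper below and its timing numbers are purely illustrative assumptions (pods restart in batches of maxUnavailable, each batch taking a fixed time to become ready), not measurements from a real cluster:

      ```python
      # Back-of-the-envelope sketch: rolling-update wall time for a daemonset
      # as a function of node count and maxUnavailable. All numbers are
      # illustrative assumptions, not real OpenShift measurements.
      import math

      def rollout_minutes(nodes, max_unavailable, minutes_per_pod=2):
          """Pods restart in batches of max_unavailable; each batch takes
          minutes_per_pod to become ready (assumed constant)."""
          if isinstance(max_unavailable, str) and max_unavailable.endswith("%"):
              percent = int(max_unavailable[:-1])
              batch = max(1, math.floor(nodes * percent / 100))
          else:
              batch = int(max_unavailable)
          return math.ceil(nodes / batch) * minutes_per_pod

      # With maxUnavailable=1, rollout time grows linearly with node count;
      # with 10%, it stays roughly constant once the cluster is large enough.
      for n in (10, 100, 500):
          print(n, rollout_minutes(n, 1), rollout_minutes(n, "10%"))
      # → 10 20 20
      # → 100 200 20
      # → 500 1000 20
      ```

      The point of the sketch: at 100 nodes the maxUnavailable=1 rollout is already an order of magnitude slower than a 10% rollout, and the gap widens as nodes are added.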

      Red Hat has changed the 'maxUnavailable' of many daemonsets from 1 to 10% or more to reduce OpenShift upgrade time.

      As a result, OpenShift upgrade time has been reduced.
      However, OpenShift Logging 5.3 still sets the maxUnavailable of its daemonset (collector) to 1 even now:

      $ oc get ds -A -o yaml | grep -e "^ name:" -e "^ namespace:" -e maxUnavailable

      ...
      name: collector
      namespace: openshift-logging
      maxUnavailable: 1
      ...

      To reduce OpenShift Logging upgrade time, it should likewise be updated to 10% or more, in line with the other daemonsets.

              jamparke@redhat.com Jamie Parker
              rhn-support-adsoni Aditya Soni (Inactive)
              Votes: 1
              Watchers: 5
