Operator Runtime / OPRUN-2182

Move guest cluster OLM components to run in the control plane namespace in a HyperShift topology


    • OLM hypershift enablement
    • BU Product Work
    • Done
    • OCPSTRAT-112 - Define & Implement HyperShift Network Topology & Component Locality
    • OCPSTRAT-582 - Add OLM to HyperShift Controlplane
    • 0% To Do, 0% In Progress, 100% Done
    • XL

      OCP/Telco Definition of Done
      Epic Template descriptions and documentation.

      Epic Goal

      • All OLM components run in the control plane namespace for HyperShift-deployed guest clusters.
      • This includes the default catalog pods, but not user-defined catalogs.
      • The set of default catalogs can be controlled by the provisioner of the guest cluster, not necessarily the admin of the guest cluster (see the sketch after this list).
      • OLM blocks guest cluster upgrades based on installed OLM operator compatibility.
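
      The catalog-control goal above could ride on the standard OperatorHub cluster configuration, written by the provisioner of the guest cluster rather than its admin. Below is a minimal sketch using controller-runtime and the openshift/api config types; the package name, function name, and example source names are illustrative assumptions, not the agreed implementation:

        package provision

        import (
            "context"

            configv1 "github.com/openshift/api/config/v1"
            "sigs.k8s.io/controller-runtime/pkg/client"
        )

        // disableDefaultCatalogs turns off all default catalog sources in the
        // guest cluster, then re-enables only the ones the provisioner keeps.
        // The OperatorHub resource is cluster-scoped and named "cluster".
        func disableDefaultCatalogs(ctx context.Context, c client.Client, keep []string) error {
            hub := &configv1.OperatorHub{}
            if err := c.Get(ctx, client.ObjectKey{Name: "cluster"}, hub); err != nil {
                return err
            }
            hub.Spec.DisableAllDefaultSources = true
            hub.Spec.Sources = nil
            for _, name := range keep {
                // e.g. "redhat-operators" (illustrative)
                hub.Spec.Sources = append(hub.Spec.Sources, configv1.HubSource{Name: name, Disabled: false})
            }
            return c.Update(ctx, hub)
        }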

      Why is this important?

      • HyperShift is an important deployment topology for OpenShift, and OpenShift core components should not consume user (worker node) resources. Running OLM in the control plane namespace means customers neither pay for its resource consumption nor are exposed to its implementation details.

      Scenarios

      1. As a user of a guest cluster, I should not see any OLM components (operators, controllers, default catalog pods) running on the guest worker nodes (a verification sketch follows below)
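
      A CI check for this scenario could list every pod visible in the guest cluster and flag any OLM component scheduled onto a worker node. A sketch with client-go follows; the pod-name prefixes are an illustrative, non-exhaustive assumption:

        package e2e

        import (
            "context"
            "strings"
            "testing"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
        )

        // olmPrefixes lists pod-name prefixes for OLM components that must stay
        // in the control plane namespace (illustrative, not exhaustive).
        var olmPrefixes = []string{
            "olm-operator", "catalog-operator", "packageserver",
            "certified-operators", "community-operators", "redhat-operators",
        }

        func verifyNoOLMPodsOnWorkers(ctx context.Context, t *testing.T, guest kubernetes.Interface) {
            pods, err := guest.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
            if err != nil {
                t.Fatalf("listing guest cluster pods: %v", err)
            }
            for _, pod := range pods.Items {
                for _, prefix := range olmPrefixes {
                    if strings.HasPrefix(pod.Name, prefix) {
                        t.Errorf("OLM pod %s/%s runs on guest node %s", pod.Namespace, pod.Name, pod.Spec.NodeName)
                    }
                }
            }
        }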

      Acceptance Criteria

      • CI - MUST be running successfully with tests automated
      • Release Technical Enablement - Provide necessary release enablement details and documents.
      • All OLM component pods run in the control plane namespace, not on the worker nodes
      • The set of default catalogs can be controlled by the deployer/maintainer of the guest cluster (not necessarily the guest cluster admin)
      • Guest cluster admins can add additional catalogs as normal
      • Guest cluster admins can install/upgrade/uninstall operators as normal
      • OLM metrics are reported for each guest cluster
      • OLM components are not installed/managed by the CVO in hypershift topologies
      • Guest cluster upgrades are blocked when an installed OLM operator sets a max OpenShift version that is less than or equal to the current guest cluster version (see the sketch below).
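
      The last criterion maps onto the existing olm.maxOpenShiftVersion property that operators declare in their CSVs. A sketch of the version gate, assuming the declared values have already been collected from the installed CSVs (the helper name and its inputs are hypothetical):

        package upgrade

        import (
            "fmt"

            "github.com/blang/semver/v4"
        )

        // checkMaxOpenShiftVersions returns an error (blocking the upgrade) when
        // any installed operator declares an olm.maxOpenShiftVersion less than or
        // equal to the current guest cluster version, per the criterion above.
        // maxVersions maps CSV name to its declared olm.maxOpenShiftVersion.
        func checkMaxOpenShiftVersions(currentVersion string, maxVersions map[string]string) error {
            current, err := semver.ParseTolerant(currentVersion)
            if err != nil {
                return fmt.Errorf("parsing guest cluster version %q: %w", currentVersion, err)
            }
            for csv, max := range maxVersions {
                declared, err := semver.ParseTolerant(max)
                if err != nil {
                    return fmt.Errorf("parsing olm.maxOpenShiftVersion %q on %s: %w", max, csv, err)
                }
                if declared.LE(current) {
                    return fmt.Errorf("operator %s blocks upgrade: olm.maxOpenShiftVersion %s <= cluster version %s", csv, max, currentVersion)
                }
            }
            return nil
        }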

      Dependencies (internal and external)

      1. N/A

      Previous Work (Optional):

      1. Ben and Evan have PoC'd this work:
        https://github.com/openshift/hypershift/pull/207
        https://github.com/openshift/operator-framework-olm/pull/69
        https://github.com/operator-framework/operator-marketplace/pull/396

      Open questions:

      1. Implications for the new operator API
      2. Implications for descoped operators

      Done Checklist

      • CI - CI is running, tests are automated and merged.
      • Release Enablement <link to Feature Enablement Presentation>
      • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
      • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
      • DEV - Downstream build attached to advisory: <link to errata>
      • QE - Test plans in Polarion: <link or reference to Polarion>
      • QE - Automated tests merged: <link or reference to automated tests>
      • DOC - Downstream documentation merged: <link to meaningful PR>

              Eric Stroczynski (Inactive)
              Ben Parees
              Jian Zhang