OCMUI-1526: Data Sovereignty for the Cluster List Page
Project: OCMUI - OpenShift Cluster Manager UI

    • Type: Epic
    • Resolution: Unresolved
    • Priority: Critical
    • Label: ocmui2024Q2
    • Team: A-Team
    • Status: To Do
    • Parent: XCMSTRAT-589 - [Internal Preview] OCM console can connect to regional OCM instances to manage cluster lifecycle
    • Progress: 8% To Do, 3% In Progress, 89% Done

      Today, the OCM UI uses subscription data from AMS to populate the cluster list and to drive paging, filtering, and sorting. As part of the data sovereignty effort, cluster metadata other than the cluster ID and Red Hat region will be scrubbed from global AMS and stored only in the Cluster Service for each region. This means the console will need to fetch the data from the Cluster Service in each region and aggregate the results on the client to present a single, global cluster list.
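
      A minimal sketch of the fan-out-and-merge step, in TypeScript. The Cluster shape, the endpoint path, and the error handling here are illustrative assumptions, not the final Cluster Service contract:

      interface Cluster {
        id: string;
        name: string;
        region: string; // the Red Hat region the cluster lives in
      }

      // Fetch the cluster list from a single regional Cluster Service instance.
      async function fetchRegionClusters(regionBaseUrl: string): Promise<Cluster[]> {
        const response = await fetch(`${regionBaseUrl}/api/clusters_mgmt/v1/clusters`);
        if (!response.ok) {
          throw new Error(`Cluster Service request to ${regionBaseUrl} failed: ${response.status}`);
        }
        const body = await response.json();
        return body.items as Cluster[];
      }

      // Fan out to every known region and merge the results. allSettled keeps one
      // unreachable region from failing the whole list; failed regions should
      // surface as a warning in the UI rather than a hard error.
      async function fetchAllClusters(regionBaseUrls: string[]): Promise<Cluster[]> {
        const results = await Promise.allSettled(regionBaseUrls.map(fetchRegionClusters));
        return results.flatMap((r) => (r.status === 'fulfilled' ? r.value : []));
      }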

      For performance, we’ll want to know which regions a particular organization uses. If a user can only see clusters in three regions, we can make requests to only those regions and combine the results on the client. That way we don’t have to fan out to dozens of endpoints as we expand to more regions, which would not scale.
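
      A sketch of that region-scoped fan-out, reusing fetchAllClusters from the sketch above. The region-lookup endpoint and the per-region base URL scheme are hypothetical; where the org-to-regions mapping actually lives is still an open question:

      // Hypothetical lookup: the real source of an org's region list is TBD.
      async function fetchOrgRegions(orgId: string): Promise<string[]> {
        const response = await fetch(`/api/accounts_mgmt/v1/organizations/${orgId}/regions`);
        const body = await response.json();
        return body.items as string[]; // e.g. ['us-east-1', 'eu-west-1']
      }

      // Fan out only to the regions the org actually uses, rather than every
      // regional OCM instance we ever stand up.
      async function fetchOrgClusters(orgId: string): Promise<Cluster[]> {
        const regions = await fetchOrgRegions(orgId);
        // Assumed URL scheme for regional instances, for illustration only.
        const baseUrls = regions.map((region) => `https://api.${region}.openshift.com`);
        return fetchAllClusters(baseUrls);
      }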

      Today, the OCM UI relies on the backend API to handle paging, sorting, and filtering, and it only fetches the data for one page at a time. If we’re aggregating responses from multiple endpoints on the client, this is no longer possible, and all paging, sorting, and filtering will need to be handled on the client. We will need to assess the impact on performance for orgs with a large number of clusters. While most organizations will have a reasonably small number of clusters, we’ll need an approach for handling organizations like Red Hat that have tens of thousands of clusters, or support engineers who can see all clusters. The UI can’t fall over in these cases. We might need to limit the number of clusters we load and display, and require that users filter the list to find the clusters they want.
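
      Once the merged list lives on the client, paging, sorting, and filtering reduce to plain array operations. A minimal sketch, with an assumed (not decided) cap for the very-large-fleet case:

      interface ListState {
        filter: string; // substring match against the cluster name
        sortField: keyof Cluster;
        sortAscending: boolean;
        page: number; // zero-based page index
        pageSize: number;
      }

      // Placeholder guardrail, not a decided number: above this, the UI could
      // decline to render an unfiltered list and prompt the user to narrow it.
      const MAX_UNFILTERED_CLUSTERS = 5000;

      function requiresFilter(clusters: Cluster[], state: ListState): boolean {
        return !state.filter && clusters.length > MAX_UNFILTERED_CLUSTERS;
      }

      function getVisiblePage(clusters: Cluster[], state: ListState): Cluster[] {
        const filtered = state.filter
          ? clusters.filter((c) => c.name.toLowerCase().includes(state.filter.toLowerCase()))
          : clusters;
        const sorted = [...filtered].sort((a, b) => {
          const cmp = String(a[state.sortField]).localeCompare(String(b[state.sortField]));
          return state.sortAscending ? cmp : -cmp;
        });
        const start = state.page * state.pageSize;
        return sorted.slice(start, start + state.pageSize);
      }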

      Work for this epic includes:

      • Fan out to endpoints and merge cluster list on the client
      • Updated filtering
      • Updated sorting
      • Graceful handling of extremely large cluster lists

      Depends on:

      Out of scope:

      • Archived cluster list (will be tracked as a separate issue)

      We'll need to evaluate whether to create new cluster list components to avoid touching or breaking working code, or to make the changes in place.

              People: Kim Doberstein, Samuel Padgett, David Aznaurov, Dylan Cooper, Denis Ragan