OCMUI-592

Refactoring plan for clusters/details redux state

    • Type: Task
    • Resolution: Unresolved
    • Priority: Normal
    • Component: Core UI

      [This card is more a plan/proposal document than a task; I'm writing it in Jira for cross-linking but actual work will probably be tracked in separate cards.]

      Current redux structure

      clusters.clusters

      Represents the result of the last list request (the subscriptions list, enriched with some data from other APIs).

        {
          error: false,
          pending: false,
          fulfilled: true,
          valid: true,  // allows forcing reload while still showing current list
          clusters: [
            {…}, {…}, {…}, … // ClusterWithPermissions[]
          ],
          queryParams: {
            page: 1,
            page_size: 50,
            has_filters: false,
            order: 'display_name asc',
            filter: "(cluster_id!='') AND (status NOT IN ('Deprovisioned', 'Archived')) AND (display_name ILIKE '%%' OR external_cluster_id ILIKE '%%' OR cluster_id ILIKE '%%')"
          },
          meta: {}
        }
      

      clusters.details

      Represents data for the "current" cluster (subscription, enriched with more APIs than the list).

      {
        details: {
          error: false,
          pending: false,
          fulfilled: true,
          valid: true,
          cluster: {
             … // AugmentedCluster (but sometimes seeded with
               //     ClusterWithPermissions subset from clusters.clusters)
           }
        }
      }

      Problems

      1. AugmentedCluster & ClusterWithPermissions are historically rooted at Cluster, with other data like .subscription tacked under it.  This no longer makes sense: in some cases we don't fetch Cluster at all, we fake it instead.
      2. Success/failure is all-or-nothing.  (We do have meta.clustersServiceError, for which we show an ErrorBox, but most code doesn't expect those fields to be missing.)
      3. The SET_DETAILS action copies AugmentedCluster from the list into clusters.details, providing only a subset of the ClusterWithPermissions fields expected there.
      4. clusters.clusters.queryParams is set only after the response, which makes it impossible to know which request(s) are pending.
      5. clusters.clusters.queryParams is not set on error, which makes it impossible to know which request(s) errored.
      6. clusters.clusters has room for only one "current list".  When typing search text / filtering, data for clusters outside the current response is forgotten.  This adds avoidable latency when clearing the search/filter.
      7. clusters.clusters keeps no request timestamp(s), which would be helpful for more complicated caching.
      8. clusters.details.subscription.id is set only after the response, which makes it impossible to know which request(s) are pending.
      9. clusters.details.subscription.id is not set on error, which makes it impossible to know which request(s) errored.  (For example, this caused HAC-2922; see there about some very weird code we now have that checks the stored id vs. the desired id, where the check is logically wrong, probably as compensation for the stored id being stale on errors?!)
      10. clusters.details has room for only one "current cluster".  This adds avoidable latency when switching between clusters (e.g. HAC-2923).
      11. clusters.details keeps no request timestamp(s), which would be helpful for more complicated caching.

      Endgame: ideas for the most flexible representation

      • Have a flat storage of cluster data keyed by id (see the sketch after this list).
      • Each value contains a separate sub-object for each backend request (some optional), with its own error/pending details.
        • The reducer will compute an overall fulfilled/pending/error.
      • Keep a timestamp with each cluster's data (and maybe with the per-request sub-objects?).
      • When a cluster list response arrives: store the clusters' data into the flat storage by id, updating their status & timestamp; separately store just the list of returned ids + timestamp, keyed by queryParams.
        • So when rapidly editing the search string / filters, we may display a cached list if we have one with a recent enough timestamp (and/or hide latency by client-side filtering?).
      • When cluster detail responses arrive: update data + status + timestamp in the flat storage.
      • Don't store which cluster is "current" in redux!  This is routing responsibility, represented in the URL.
        • SET_DETAILS becomes a no-op: if the current details/... URL points to a cluster that's already in the flat by-id storage, use it.
      • Periodically prune data with old timestamps.
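
      A minimal TypeScript sketch of what such a shape could look like.  All type and field names here (ClustersState, RequestSlice, byId, listsByQuery, etc.) are illustrative placeholders, not existing OCM UI types:

        // Illustrative sketch only — a flat storage keyed by id, with
        // per-request sub-objects, timestamps, and lists keyed by queryParams.

        interface RequestSlice<T> {
          pending: boolean;
          fulfilled: boolean;
          error?: { status?: number; message: string };
          fetchedAt?: number; // timestamp of the last response for this request
          data?: T;
        }

        interface ClusterEntry {
          // One sub-object per backend request; some are optional.
          subscription: RequestSlice<unknown>;
          clusterService?: RequestSlice<unknown>; // e.g. optional clusters-service data
          // Overall status computed by the reducer from the sub-objects above.
          overall: { pending: boolean; fulfilled: boolean; error: boolean };
          updatedAt: number;
        }

        interface ClusterListEntry {
          ids: string[];     // just the ids returned for this query
          fetchedAt: number; // lets us decide whether the cached list is fresh enough
        }

        interface ClustersState {
          byId: Record<string, ClusterEntry>;             // flat storage keyed by subscription id
          listsByQuery: Record<string, ClusterListEntry>; // keyed by serialized queryParams
        }

      Pruning then becomes a periodic sweep over byId / listsByQuery, dropping entries whose timestamps are older than some threshold.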

      Stepwise plan

      1. Write cypress tests for various scenarios around 404 errors like HAC-2922.
      2. Pass queryParams and the subscription ID via meta, which is also passed to the _PENDING & _REJECTED actions.  Initially store this in addition to the existing queryParams, to avoid unexpected effects on ClusterList?  (See the first sketch after this list.)
      3. Add a selector for the "current cluster" which takes the desired subscription ID.  Encapsulate the check vs. the stored ID here!  (See the second sketch after this list.)
      4. Pass the desired subscription ID from ClusterDetails down to sub-components (that presently just look at clusters.details).  Have them use the new selector.
      5. ... re-evaluate priorities around this point.  Possibly attack the fakeCluster nonsense, and/or demote the optional clusters-service data to a sub-object? ...
      6. Try moving clusters.details to per-cluster-by-id storage?  In theory this might be a transparent change?  But remember the need for pruning.
      7. Add a selector for the "current cluster list" encapsulating current vs. stored queryParams?
      8. Change ClusterListTable etc. to use the new selector.
      9. Try moving clusters to by-queryParams storage?  Remember the need for pruning.
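
      To illustrate step 2, a hedged TypeScript sketch of passing the subscription ID through meta with redux-promise-middleware (which copies meta onto the derived _PENDING/_FULFILLED/_REJECTED actions).  The action type, getSubscription helper, and state shape are stand-ins, not the real OCM UI code:

        // Sketch only — names are hypothetical.
        interface DetailsState {
          pending: boolean;
          fulfilled: boolean;
          error: boolean;
          subscriptionID?: string; // which request this status refers to
          cluster?: unknown;
        }

        declare function getSubscription(subscriptionID: string): Promise<unknown>;

        // meta travels with every derived action, so the reducer always knows
        // which subscription a pending or failed request was for.
        const fetchClusterDetails = (subscriptionID: string) => ({
          type: 'FETCH_CLUSTER_DETAILS',
          payload: getSubscription(subscriptionID),
          meta: { subscriptionID },
        });

        const detailsReducer = (
          state: DetailsState = { pending: false, fulfilled: false, error: false },
          action: { type: string; payload?: unknown; meta?: { subscriptionID: string } },
        ): DetailsState => {
          switch (action.type) {
            case 'FETCH_CLUSTER_DETAILS_PENDING':
              return { ...state, pending: true, subscriptionID: action.meta?.subscriptionID };
            case 'FETCH_CLUSTER_DETAILS_REJECTED':
              return { ...state, pending: false, error: true, subscriptionID: action.meta?.subscriptionID };
            case 'FETCH_CLUSTER_DETAILS_FULFILLED':
              return {
                ...state,
                pending: false,
                fulfilled: true,
                error: false,
                subscriptionID: action.meta?.subscriptionID,
                cluster: action.payload,
              };
            default:
              return state;
          }
        };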
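
      And for step 3, a similar sketch of a "current cluster" selector that takes the desired subscription ID; again the names and state shape are assumptions, not the real code:

        interface CurrentClusterResult {
          pending: boolean;
          fulfilled: boolean;
          error: boolean;
          subscriptionID?: string;
          cluster?: unknown;
        }

        // Hypothetical selector — encapsulates the "stored ID vs. desired ID"
        // check in one place instead of scattering it across components
        // (cf. HAC-2922).
        const selectCurrentCluster = (
          state: { clusters: { details: CurrentClusterResult } },
          desiredSubscriptionID: string,
        ): CurrentClusterResult => {
          const details = state.clusters.details;
          // Only trust the stored status/data if it refers to the subscription
          // the caller asked for; otherwise report "not loaded yet".
          if (details.subscriptionID !== desiredSubscriptionID) {
            return { pending: false, fulfilled: false, error: false };
          }
          return details;
        };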

              Assignee: Unassigned
              Reporter: Beni Paskin-Cherniavsky (bpaskinc@redhat.com)