OpenShift Service Mesh · OSSM-898

[TP] Cluster Wide Mesh


    • Type: Epic
    • Resolution: Done
    • Priority: Major
    • Fix Version: OSSM 2.3.0
    • Component: Maistra
    • Epic Name: Cluster Wide Mesh Installation
    • doc_ack
    • Documentation (Ref Guide, User Guide, etc.), Release Notes, User Experience
    • To Do
    • 0% To Do, 0% In Progress, 100% Done
    • Dev Preview
    • Technology Preview
    • Done

      Currently, Service Mesh uses a multi-tenant topology that allows multiple service mesh control planes to operate within a single cluster.

      While this enables a single cluster to support multiple teams, or tenants, each with network isolation, it makes configuration difficult when a cluster is dedicated to a single mesh spanning many namespaces: the ServiceMeshMemberRoll (SMMR) must then list every namespace in the cluster, which can run to several hundred entries.
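      As an illustration, membership today is declared by enumerating namespaces in the SMMR. A minimal sketch (the namespace names are hypothetical; `default` in `istio-system` is the conventional SMMR name):

      ```yaml
      apiVersion: maistra.io/v1
      kind: ServiceMeshMemberRoll
      metadata:
        name: default
        namespace: istio-system
      spec:
        members:
          # Every mesh namespace must be listed explicitly.
          - team-a-frontend
          - team-a-backend
          - team-b-api
          # ...in a single-mesh cluster, potentially hundreds more
      ```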

      The challenge with a large SMMR is not just administrative: the control plane must reconcile configuration across hundreds of namespaces, creating an explosion of complexity. This can degrade the performance of reconciliation and, potentially, of the mesh itself.

      This is also a significant divergence from upstream, which only supports a cluster-wide installation. We have seen that many of our customers have no intention of using service mesh in a multi-tenant manner and would be satisfied with a cluster-wide installation.

      This epic is to start collecting information on a cluster-wide option for OSSM. Customers would still have the multi-tenant option.
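      For comparison, a sketch of what opting into a cluster-wide mesh could look like on the ServiceMeshControlPlane. This assumes the `spec.mode: ClusterWide` field delivered as Technology Preview in OSSM 2.3; treat it as illustrative rather than a finalized API:

      ```yaml
      apiVersion: maistra.io/v2
      kind: ServiceMeshControlPlane
      metadata:
        name: basic
        namespace: istio-system
      spec:
        version: v2.3
        # Cluster-wide mode: the control plane watches all namespaces,
        # so the SMMR no longer needs to enumerate each one.
        mode: ClusterWide
      ```

      In this mode, membership can be driven by label selectors instead of an explicit per-namespace list, which is what removes the large-SMMR reconciliation burden described above.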

       

              Assignee: Unassigned
              Reporter: Jamie Longmuir (jlongmui@redhat.com)
              Votes: 1
              Watchers: 12
