Feature
Resolution: Unresolved
Critical
BU Product Work
OCPSTRAT-1247 OpenShift Networking Universal Connectivity
0% To Do, 33% In Progress, 67% Done
Program Call
This new functionality could use TE
Red Hat OpenShift Networking
Feature Overview (aka. Goal Summary)
Support network isolation and multiple primary networks (with the possibility of overlapping IP subnets) without having to use Kubernetes Network Policies.
Goals (aka. expected user outcomes)
- Provide a configurable way to indicate that a pod should be connected to a unique network of a specific type via its primary interface.
- Allow networks to have overlapping IP address space.
- The primary network defined today will remain in place as the default network that pods attach to when no unique network is specified.
- Support cluster ingress/egress traffic for unique networks, including secondary networks.
- Support for ingress/egress features where possible, such as:
  - EgressQoS
  - EgressService
  - EgressIP
  - Load Balancer Services
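As a sketch of what the first goal could look like in practice, the manifest below declares a layer 2 network intended as the primary network for a namespace. This is an assumption for illustration only: it reuses today's OVN-Kubernetes secondary-network NetworkAttachmentDefinition config format, and the `role: primary` field and final API shape are not defined by this feature.

```yaml
# Hypothetical example: a layer 2 network serving as the primary network
# for pods in the "tenant-a" namespace. Field names mirror the existing
# OVN-Kubernetes secondary-network config and may differ in the final API.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-a-net
  namespace: tenant-a
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovn-k8s-cni-overlay",
      "name": "tenant-a-net",
      "topology": "layer2",
      "subnets": "10.100.0.0/24",
      "netAttachDefName": "tenant-a/tenant-a-net",
      "role": "primary"
    }
```

Under this sketch, pods created in `tenant-a` with no explicit network selection would attach to `tenant-a-net` via their primary interface instead of the cluster default network.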
Requirements (aka. Acceptance Criteria):
- Support for 10,000 namespaces
Anyone reviewing this Feature needs to know which deployment configurations the Feature will apply to (or not) once it's been completed. Describe specific needs (or indicate N/A) for each of the following deployment scenarios. For configurations that are out of scope for a given release, also provide the OCPSTRAT reference for the future supported configuration.
| Deployment considerations | List applicable specific needs (N/A = not applicable) |
| --- | --- |
| Self-managed, managed, or both | |
| Classic (standalone cluster) | |
| Hosted control planes | |
| Multi node, Compact (three node), or Single node (SNO), or all | |
| Connected / Restricted Network | |
| Architectures, e.g. x86_64, ARM (aarch64), IBM Power (ppc64le), and IBM Z (s390x) | |
| Operator compatibility | |
| Backport needed (list applicable versions) | |
| UI need (e.g. OpenShift Console, dynamic plugin, OCM) | |
| Other (please specify) | |
Use Cases (Optional):
- As an OpenStack or vSphere/vCenter user migrating to OpenShift Kubernetes, I want to guarantee that my OpenStack/vSphere tenant network isolation remains intact as I move into Kubernetes namespaces.
- As an OpenShift Kubernetes user, I do not want to have to rely on Kubernetes Network Policy and prefer to have native network isolation per tenant using a layer 2 domain.
- As an OpenShift Network Administrator with multiple identical application deployments across my cluster, I require a consistent IP-addressing subnet per deployment type. Multiple applications in different namespaces must always be accessible using the same, predictable IP address.
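The last use case can be sketched as two network definitions in different namespaces that deliberately reuse the same subnet: because each network is isolated, identical deployments in each namespace receive predictable addresses from an identical range. The manifest format and `role` field are assumptions borrowed from the OVN-Kubernetes secondary-network config, not a finalized API:

```yaml
# Hypothetical: two isolated primary networks with an overlapping subnet.
# Identical app deployments in team-blue and team-green can use the same
# predictable addressing because the networks never see each other.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: app-net
  namespace: team-blue
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovn-k8s-cni-overlay",
      "name": "team-blue-app-net",
      "topology": "layer2",
      "subnets": "10.100.0.0/24",
      "netAttachDefName": "team-blue/app-net",
      "role": "primary"
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: app-net
  namespace: team-green
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ovn-k8s-cni-overlay",
      "name": "team-green-app-net",
      "topology": "layer2",
      "subnets": "10.100.0.0/24",
      "netAttachDefName": "team-green/app-net",
      "role": "primary"
    }
```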
Questions to Answer (Optional):
Out of Scope
- Multiple External Gateway (MEG) support - support will remain for the default primary network.
- Pod ingress support - support will remain for the default primary network.
- Cluster IP Service reachability across networks. Services and endpoints will be available only within the unique network.
- Allowing different service CIDRs to be used in different networks.
- Localnet will not be supported initially for primary networks.
- Allowing multiple primary networks per namespace.
- Allowing connection of multiple networks via explicit router configuration; this may be handled in a future enhancement.
- Hybrid overlay support on unique networks.
Background
OVN-Kubernetes today supports three types of secondary networks: layer 2, layer 3, and localnet. Pods can be connected to any combination of these networks without restriction. For the primary network, however, OVN-Kubernetes only supports a single shared layer 3 virtual topology that all pods connect to.
As users migrate from OpenStack to Kubernetes, there is a need to provide network parity for those users. In OpenStack, each tenant (analogous to a Kubernetes namespace) by default has a layer 2 network, which is isolated from every other tenant. Connectivity to other networks must be configured explicitly via a Neutron router. In Kubernetes the paradigm is the opposite: by default all pods can reach all other pods, and security is provided by layering Network Policy on top.
Network Policy has its issues:
- it can be cumbersome to configure and manage for a large cluster
- it can be limiting as it only matches TCP, UDP, and SCTP traffic
- large numbers of network policies can cause performance issues in CNIs
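To illustrate the first point: even the common "isolate a namespace" baseline requires a default-deny policy plus an explicit allow rule per permitted flow, and both objects must be replicated into every namespace that needs isolation. This uses the standard Kubernetes NetworkPolicy API:

```yaml
# Deny all ingress traffic to every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Re-allow traffic from pods in the same namespace. Each additional
# permitted flow (monitoring, ingress controller, ...) needs its own rule,
# and the whole set is repeated per namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: tenant-a
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress
```

Per-tenant isolated networks would make this baseline implicit, with no per-namespace policy objects to maintain.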
With all these factors considered, there is a clear need to address network security in a native fashion, by using networks per user to isolate traffic instead of using Kubernetes Network Policy.
Therefore, the scope of this effort is to bring the same flexibility of the secondary network to the primary network and allow pods to connect to different types of networks that are independent of networks that other pods may connect to.
Customer Considerations
Documentation Considerations
Interoperability Considerations
Test scenarios:
- E2E upstream and downstream jobs covering supported features across multiple networks.
- E2E tests ensuring network isolation between OVN networked and host networked pods, services, etc.
- E2E tests covering network subnet overlap and reachability to external networks.
- Scale testing to determine limits and impact of multiple unique networks.