
      Feature Overview (aka. Goal Summary)  

      This feature introduces explicit ICMP protocol support in OpenShift’s implementation of Kubernetes MultiNetworkPolicy (MNP). The goal is to allow administrators and users to define, permit, and restrict ICMP traffic on secondary networks in the same manner as TCP and UDP. This enables essential network diagnostics, troubleshooting, and health check capabilities for VM-based workloads attached via secondary interfaces, while maintaining consistent administrative control and policy enforcement.
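
      To make the intended shape concrete, the sketch below shows a MultiNetworkPolicy that permits ICMP alongside TCP on a secondary network. The apiVersion, kind, and the k8s.v1.cni.cncf.io/policy-for annotation follow the existing MultiNetworkPolicy API; the protocol: ICMP entry and all names (namespace, network, labels) are illustrative assumptions, since defining that field is exactly what this feature proposes.

        apiVersion: k8s.cni.cncf.io/v1beta1
        kind: MultiNetworkPolicy
        metadata:
          name: allow-icmp-and-ssh            # hypothetical policy name
          namespace: vm-workloads             # hypothetical namespace
          annotations:
            # Existing convention: bind the policy to a secondary network
            # (NetworkAttachmentDefinition) instead of the default network.
            k8s.v1.cni.cncf.io/policy-for: vm-workloads/storage-net
        spec:
          podSelector:
            matchLabels:
              app: database-vm                # hypothetical label on the VM's launcher pod
          policyTypes:
          - Ingress
          ingress:
          - from:
            - podSelector:
                matchLabels:
                  role: diagnostics
            ports:
            - protocol: TCP                   # supported today; shown for parity
              port: 22
            - protocol: ICMP                  # proposed: ICMP names only a protocol,
                                              # no port; exact field shape is TBD

      As with TCP/UDP policies today, this assumes MultiNetworkPolicy enforcement is enabled on the cluster (the useMultiNetworkPolicy setting on the cluster Network operator configuration).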

      Goals (aka. expected user outcomes)

      • Enable ICMP as a first-class, explicitly referenceable protocol in MultiNetworkPolicy.
      • Ensure ICMP traffic behaves consistently across default and secondary networks when policies are applied.
      • Allow administrators to control ICMP access using AdminMultiNetworkPolicy and BaselineAdminMultiNetworkPolicy constructs.
      • Preserve customer expectations for basic connectivity testing, availability checks, and troubleshooting workflows.
      • Align OpenShift behavior with documented expectations and existing usage patterns for ICMP.

      Requirements (aka. Acceptance Criteria):

      • MultiNetworkPolicy must support ICMP as an allowed and denied protocol.
      • ICMP support must apply to secondary networks, including VM secondary interfaces.
      • Policy evaluation must respect the existing policy hierarchy (Admin, Baseline Admin, user-defined MNP); an admin-tier sketch follows this list.
      • ICMP policy behavior must be deterministic and consistent with TCP/UDP policy enforcement.
      • Documentation and examples must be updated to reflect ICMP support in MNP.
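
      To make the hierarchy requirement above concrete, the sketch below shows an admin-tier rule permitting ICMP on selected secondary-network namespaces. AdminMultiNetworkPolicy is shown in a shape modeled on the upstream AdminNetworkPolicy API (priority, subject, Allow/Deny/Pass actions); the group/version, the ICMP match, and all names and labels are illustrative assumptions, not an existing schema.

        apiVersion: policy.networking.k8s.io/v1alpha1    # placeholder group/version
        kind: AdminMultiNetworkPolicy
        metadata:
          name: allow-icmp-diagnostics
        spec:
          priority: 10                        # admin tier: evaluated before user-defined MNP
          subject:
            namespaces:
              matchLabels:
                network.example.com/secondary: "true"    # hypothetical namespace label
          ingress:
          - name: allow-icmp
            action: Allow                     # an admin Allow cannot be overridden by user policy
            from:
            - namespaces:
                matchLabels:
                  team: monitoring            # hypothetical source namespaces
            # Proposed ICMP match; the upstream admin-policy ports field only
            # covers TCP/UDP/SCTP today, so how ICMP is expressed here is one of
            # the open questions listed later in this Feature.
            ports:
            - protocol: ICMP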

      Anyone reviewing this Feature needs to know which deployment configurations the Feature will apply to (or not) once it has been completed. Describe specific needs (or indicate N/A) for each of the following deployment scenarios. For configurations that are out of scope for a given release, also provide the OCPSTRAT reference for the future supported configuration.

      Deployment considerations (list applicable specific needs; N/A = not applicable):
      • Self-managed, managed, or both:
      • Classic (standalone cluster):
      • Hosted control planes:
      • Multi node, Compact (three node), Single node (SNO), or all:
      • Connected / Restricted Network:
      • Architectures, e.g. x86_64, ARM (aarch64), IBM Power (ppc64le), and IBM Z (s390x):
      • Operator compatibility:
      • Backport needed (list applicable versions):
      • UI need (e.g. OpenShift Console, dynamic plugin, OCM):
      • Other (please specify):

      Use Cases (Optional):

      • Network Diagnostics: VM users validate connectivity using ping or similar ICMP-based tools.
      • Troubleshooting: Operators diagnose routing, firewall, or segmentation issues on secondary networks.
      • Health Checking: Infrastructure and monitoring systems rely on ICMP reachability checks for VM workloads.
      • Administrative Control: Cluster administrators explicitly permit ICMP via AdminMultiNetworkPolicy while restricting other protocols via user policies.
      • Segmentation Enforcement: BaselineAdminMultiNetworkPolicy explicitly denies ICMP where required for compliance or security (sketched below).
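
      For the segmentation-enforcement case, a baseline default-deny might look like the sketch below. BaselineAdminMultiNetworkPolicy is again shown in a shape modeled on the upstream BaselineAdminNetworkPolicy API (cluster-wide defaults, no priority, overridable by user policies); the group/version, the ICMP match, and all names and labels are illustrative assumptions.

        apiVersion: policy.networking.k8s.io/v1alpha1    # placeholder group/version
        kind: BaselineAdminMultiNetworkPolicy
        metadata:
          name: default                       # baseline policies are typically a single cluster default
        spec:
          subject:
            namespaces:
              matchLabels:
                compliance.example.com/no-icmp: "true"   # hypothetical namespace label
          ingress:
          - name: deny-icmp-by-default
            action: Deny                      # applies only where no user MNP explicitly allows ICMP
            from:
            - namespaces: {}                  # any source namespace
            ports:
            - protocol: ICMP                  # proposed ICMP match, shape TBD

      Together with the admin-tier Allow sketched earlier, this illustrates the hierarchy described in the requirements: admin rules take precedence, user-defined MultiNetworkPolicy can relax the baseline, and the baseline Deny fills the gap where neither says otherwise.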

      Questions to Answer (Optional):

      • How should ICMP types and codes be handled (e.g., allow all vs. granular control)?
      • Should ICMP default behavior differ when no MultiNetworkPolicy is applied?
      • How does ICMP policy enforcement interact with existing CNI and secondary network implementations?
      • Are there performance or scalability considerations when enabling ICMP policy enforcement?
      • How does this align with upstream Kubernetes NetworkPolicy behavior and limitations?

      Out of Scope

      • Redesign of the MultiNetworkPolicy API beyond adding ICMP protocol support.
      • Changes to default cluster network ICMP behavior (already supported).
      • Fine-grained ICMP type/code filtering beyond basic protocol-level allowance or denial (unless explicitly required later).
      • Non-OpenShift Kubernetes distributions or CNIs outside OpenShift support boundaries.

      Background

      Customers using OpenShift with VM workloads attached to secondary networks have observed that ICMP traffic fails when MultiNetworkPolicy is applied, despite ICMP being supported and documented for the default cluster network. ICMP is a foundational protocol for diagnostics, availability checks, and operational confidence. Existing users expect ICMP to function consistently across networking constructs and view its absence in MNP as a functional gap rather than an advanced feature request.

      Customer Considerations

      • Customers expect ICMP to “just work” for basic diagnostics unless explicitly denied by policy.
      • Administrators require parity between ICMP and other protocols in terms of policy control.
      • VM-heavy environments depend on ICMP for operational workflows and health validation.
      • Lack of ICMP support in MNP is perceived as an inconsistency or regression compared to default networking behavior.

      Documentation Considerations

      • Update OpenShift networking documentation to explicitly state ICMP support in MultiNetworkPolicy.
      • Provide clear examples of AdminMultiNetworkPolicy, MultiNetworkPolicy, and BaselineAdminMultiNetworkPolicy using ICMP.
      • Clarify expected behavior when ICMP is not explicitly permitted or denied.
      • Align documentation language with existing NetworkPolicy and security guidance.

      Interoperability Considerations

      • Verify behavior with VM workloads, pods, and mixed environments sharing secondary networks.
      • Ensure that enabling ICMP does not introduce unintended interactions with firewall rules or underlying network infrastructure.
