OpenShift Request For Enhancement / RFE-8607

Support for the Egress Router CNI Plugin in OCP

    • Type: Feature Request
    • Resolution: Unresolved
    • Priority: Major
    • Component: Network - Core
    • Work Type: Product / Portfolio Work

      The requirement is to deploy an egress router pod that receives traffic from application pods via a Service IP and forwards it to the mapped destination through its secondary interface (net1). This secondary interface is directly linked to the node’s secondary NIC (eth1.100) using the egress-router-cni plugin.

                    +----------------------+
                    |  External /          |
                    |  Mapped Destination  |
                    +----------^-----------+
                               |
                      (Secondary Network)
                               |
            +------------------|-----------------+
            |               Node-1               |
            |                                    |
            |  +------------------------------+  |
            |  |  Node Secondary NIC          |  |
            |  |  eth1.100                    |  |
            |  +--------------^---------------+  |
            |                 |                  |
            |  +--------------|---------------+  |
            |  |  Egress Router Pod           |  |
            |  |  (egress-router-cni)         |  |
            |  |                              |  |
            |  |  eth0 - Cluster Network      |  |
            |  |  net1 - Secondary NIC        |  |
            |  +--------------^---------------+  |
            |                 |                  |
            +-----------------|------------------+
                              |
                    Cluster Network (OVN)
                              |
            +-----------------|------------------+
            |               Node-2               |
            |                                    |
            |  +------------------------------+  |
            |  |  Application Pod             |  |
            |  |                              |  |
            |  |  -> Service IP --------------+--+---+
            |  +------------------------------+  |   |
            |                                    |   |
            +------------------------------------+   |
                                                     |
                 Service (ClusterIP)                 |
                 maps to Egress Router Pod <---------+

       

      Do we support [egress-router-cni](https://github.com/openshift/egress-router-cni) in OCP?

      Can we recommend this for a production OCP cluster? 

       

      I was able to achieve this with the following setup; kindly confirm whether the use case below is supported.

       

      Pre-built Egress Router Images

      • `ghcr.io/rameshsahoo111/egressrouter-fedora:latest`
      • `ghcr.io/rameshsahoo111/egressrouter:latest`

       

      Network Attachment Definition (NAD)

      Create a NetworkAttachmentDefinition using the egress-router CNI. This attaches the node’s secondary interface to the egress router Pod.

      > Ensure that the required host-side networking (for example, VLAN interfaces) is already configured.

       

      ```yaml
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: macvlan-net
        namespace: macvlan-demo
      spec:
        # "master" must name the node's secondary interface; replace as needed.
        # Note: the config string is parsed as JSON, so comments cannot go inside it.
        config: '{
          "cniVersion": "0.4.0",
          "type": "egress-router",
          "interfaceType": "macvlan",
          "interfaceArgs": {
              "master": "enp10s0.100"
          },
          "ip": {
            "addresses": [
                "192.168.70.100/24"
            ],
            "gateway": "192.168.70.1"
          }
        }'
      ```
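Because `spec.config` is parsed as JSON by the CNI runtime, a comment or trailing comma inside it will make pod creation fail. As a quick local pre-check (a minimal sketch; the file name is arbitrary and the config mirrors the NAD above), the string can be run through a JSON parser before applying the manifest:

```shell
# Write the egress-router CNI config to a scratch file and validate it.
# Any comment or trailing comma here would surface later as a CNI error.
cat > /tmp/cni-config.json <<'EOF'
{
  "cniVersion": "0.4.0",
  "type": "egress-router",
  "interfaceType": "macvlan",
  "interfaceArgs": { "master": "enp10s0.100" },
  "ip": {
    "addresses": ["192.168.70.100/24"],
    "gateway": "192.168.70.1"
  }
}
EOF
python3 -m json.tool /tmp/cni-config.json > /dev/null && echo "config is valid JSON"
```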

       

      Build the Egress Router Image

      Example Containerfile (RHEL UBI 10)

       

      ```dockerfile
      FROM registry.access.redhat.com/ubi10/ubi
      LABEL name="ubi10-egress-router" \
            description="Egress router container using nftables to control outbound traffic" \
            io.k8s.display-name="Egress Router" \
            io.openshift.tags="egress,router,nftables,networking"
      RUN dnf install -y \
          nftables \
          net-tools \
          nano \
          curl \
          procps-ng \
          iputils \
          iproute \
          && dnf clean all
      RUN mkdir -p /usr/nft \
          && touch /usr/nft/nft.rules
      CMD ["sh", "-c", "nft -f /usr/nft/nft.rules; sleep infinity"]
      ```

       

      Example Containerfile (Fedora Latest)

       

      ```dockerfile
      FROM registry.fedoraproject.org/fedora:latest
      LABEL name="fedora-egress-router" \
            description="Egress router container using nftables to control outbound traffic" \
            io.k8s.display-name="Egress Router" \
            io.openshift.tags="egress,router,nftables,networking"
      RUN dnf install -y \
          nftables \
          net-tools \
          nano \
          curl \
          procps-ng \
          iputils \
          tcpdump \
          iproute \
          && dnf clean all
      RUN mkdir -p /usr/nft \
          && touch /usr/nft/nft.rules
      CMD ["sh", "-c", "nft -f /usr/nft/nft.rules; sleep infinity"]
      ```

      Build the Image

      ```bash
      podman build -t egressrouter:latest -f Containerfile .
      ```

       

      nftables ConfigMap

      This ConfigMap:

      • Applies *SNAT* (masquerade) in the `postrouting` chain using the `net1` interface IP
      • Applies *DNAT* rules in the `prerouting` chain

      Modify only the `prerouting` section as required for your use case.

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nftconfig
        namespace: macvlan-demo
      data:
        nft.rules: |
          #!/usr/sbin/nft -f

          flush ruleset

          table ip nat {
              chain prerouting {
                  type nat hook prerouting priority -100;
                  tcp dport 8080 dnat to 192.168.70.1:8080
              }

              chain postrouting {
                  type nat hook postrouting priority 100;
                  oifname "net1" counter masquerade
              }
          }
      ```
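As an offline sanity check (a minimal sketch using `grep`; on the cluster, `oc exec deploy/egrouter -- nft list ruleset` is the authoritative verification), you can render the same rules file the ConfigMap mounts and confirm both NAT hooks are present:

```shell
# Reproduce the rules file the ConfigMap mounts at /usr/nft/nft.rules
# and check that both NAT hooks (DNAT in, masquerade out) are defined.
cat > /tmp/nft.rules <<'EOF'
#!/usr/sbin/nft -f

flush ruleset

table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100;
        tcp dport 8080 dnat to 192.168.70.1:8080
    }

    chain postrouting {
        type nat hook postrouting priority 100;
        oifname "net1" counter masquerade
    }
}
EOF
grep -q 'hook prerouting' /tmp/nft.rules \
  && grep -q 'oifname "net1" counter masquerade' /tmp/nft.rules \
  && echo "NAT hooks present"
```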

       

      Egress Router Deployment

      ```yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: egrouter
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: egrouter
        template:
          metadata:
            labels:
              app: egrouter
            annotations:
              k8s.v1.cni.cncf.io/networks: macvlan-net
          spec:
            serviceAccountName: demo
            nodeSelector:
              kubernetes.io/hostname: m2.c1.ocplabs.bm
            containers:
            - name: egrouter
              image: ghcr.io/rameshsahoo111/egressrouter:latest
              command: ["sh", "-c", "nft -f /usr/nft/nft.rules; sleep infinity"]
              securityContext:
                privileged: true
              volumeMounts:
              - name: nftconfig
                mountPath: /usr/nft/nft.rules
                subPath: nft.rules
            volumes:
            - name: nftconfig
              configMap:
                name: nftconfig
      ```

      Service Configuration

      Create a Kubernetes Service that forwards traffic to the egress router Pod according to the DNAT rules.

      ```yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: egress-1
      spec:
        type: ClusterIP
        selector:
          app: egrouter
        ports:
        - name: web-app
          protocol: TCP
          port: 8080
      ```
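Because `targetPort` is omitted, it defaults to the value of `port` (8080), which is what lines the Service up with the `tcp dport 8080` DNAT rule in the nftables ConfigMap. To make that coupling explicit, the port mapping can be spelled out (equivalent to the Service above):

```yaml
  ports:
  - name: web-app
    protocol: TCP
    port: 8080        # ClusterIP port the application pods call
    targetPort: 8080  # must match the dport in the nftables DNAT rule
```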

       

      Accessing the Destination via the Egress Router

      To access the destination:

      • Use the *Service (`egress-1`)* that is bound to the egress router Pod
      • Ensure the request matches the *DNAT rules* defined in the nftables ConfigMap

              mcurry@redhat.com Marc Curry
              rhn-support-rsahoo Ramesh Sahoo