OpenShift Bugs: OCPBUGS-59964

Egress IP dual stack does not work using EgressIP object per IP family


      Description of problem:
In a dual-stack IPv4/IPv6 OpenShift cluster, deploying one EgressIP object per IP family (one EgressIP object for IPv4 and another for IPv6) does not work properly.
With two EgressIP objects (one for IPv4 and one for IPv6), only the first EgressIP CR applied takes effect; the second does not. If the IPv6 EgressIP is applied first, IPv6 SNAT takes effect; if the IPv4 EgressIP is applied first, IPv4 SNAT takes effect. For the EgressIP that does not take effect, SNAT is done using the machineNetwork IP address of the worker node where the pod is deployed, instead of the egress IP address.

      Version-Release number of selected component (if applicable):
      OpenShift 4.16.42 - BareMetal, OVN
      OpenShift 4.18.10 - BareMetal, OVN

      How reproducible:
Always; the failure is systematic.

       

      Steps to Reproduce:

1. Deploy OCP in dual-stack mode with two worker node roles: appworker and gateway.

Workloads/pods are deployed on the appworker nodes (regular worker nodes, no taints).

The gateway nodes are tainted so that no workloads run on them; their purpose is to handle non-Multus ingress (MetalLB) and non-Multus egress using EgressIP.

The two gateway nodes are labeled with k8s.ovn.org/egress-assignable.
A VLAN interface is configured on a secondary interface using nmstate, with IPv4/IPv6 addresses for EgressIP purposes and default routes:

         routes:
           config:
           - destination: 0.0.0.0/0
             metric: 999
             next-hop-address: 192.168.118.1
             next-hop-interface: vlan94
             table-id: 254
           - destination: ::/0
             metric: 999
             next-hop-address: 2600:52:7:94::1
             next-hop-interface: vlan94
             table-id: 254
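The gateway-node labeling step above can be sketched with standard oc commands (the node names gateway-0 and gateway-1 are placeholders, not names from this environment):

```shell
# Mark the two gateway nodes as egress-assignable so OVN-Kubernetes
# may assign egress IPs to them (node names are placeholders).
oc label node gateway-0 k8s.ovn.org/egress-assignable=""
oc label node gateway-1 k8s.ovn.org/egress-assignable=""

# Confirm which nodes carry the label.
oc get nodes -l k8s.ovn.org/egress-assignable -o name
```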
      

       

2. Create an EgressIP object for IPv4 with a namespaceSelector and a podSelector.
Deploy a pod on an appworker node and, from inside the pod, try to reach an external system outside OCP, such as an HTTP server (IPv4 address), using curl for example.

apiVersion: k8s.ovn.org/v1
kind: EgressIP
      metadata:
        name: egressip-ipv4-vlan94
      spec:
        egressIPs:
          - 192.168.118.30
          - 192.168.118.31
          - 192.168.118.32
        namespaceSelector:
          matchLabels:
            env: qa
        podSelector:
          matchLabels:
            egressip: ds  
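Applying and verifying this object can be sketched as follows (the manifest file name is a placeholder; oc get egressip reports which node each egress IP was assigned to):

```shell
# Apply the IPv4 EgressIP object (file name is a placeholder).
oc apply -f egressip-ipv4-vlan94.yaml

# Check the assignment: the egress IP should be bound to a gateway node.
oc get egressip egressip-ipv4-vlan94
```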

      Pod manifest 

      apiVersion: v1
      kind: Pod
      metadata:
        name: fedora-egressip-pod-ds
        namespace: test
        labels: 
          egressip: ds
          egressipv4v6: ipv4v6
      spec:
        containers:
        - name: fedora-curl
          image: quay.io/yogananth_subramanian/fedora-tools:latest
          command: ["/bin/bash", "-c", "sleep infinity"]
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
            privileged: true
        nodeSelector:
          node-role.kubernetes.io/appworker: ""  
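The selectors in the EgressIP above only match if the test namespace carries the env=qa label. A sketch of wiring the pod up and running the check (the manifest file name is a placeholder; the label command is only needed if the namespace is not already labeled):

```shell
# Label the namespace so the EgressIP namespaceSelector matches it.
oc label namespace test env=qa

# Deploy the test pod and wait for it to become Ready (file name is a placeholder).
oc apply -f fedora-egressip-pod-ds.yaml
oc wait pod/fedora-egressip-pod-ds -n test --for=condition=Ready --timeout=120s

# Run the curl from inside the pod; the HTTP status code is printed.
oc exec -n test fedora-egressip-pod-ds -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.120.11:8080
```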

curl request from inside the pod to the HTTP server outside OCP using IPv4:

curl http://192.168.120.11:8080

HTTP server log:

# tail -f /var/log/httpd/access_log
192.168.118.30 - - [30/Jul/2025:07:20:22 -0400] "GET / HTTP/1.1" 403 5909 "-" "curl/7.51.0"

192.168.118.30 is the egress IP address.

3. Create an EgressIP object for IPv6.
From inside the same pod, try to reach an external system outside OCP, such as an HTTP server (IPv6 address), using curl for example:

      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egressip-ipv6-vlan94
      spec:
        egressIPs:
          - 2600:52:7:94::30
          - 2600:52:7:94::31
          - 2600:52:7:94::32
        namespaceSelector:
          matchLabels:
            env: qa
        podSelector:
          matchLabels:
      egressip: ds

curl http://[2600:52:7:120::9]:8080

      HTTP server log

# tail -f /var/log/httpd/access_log
2600:52:7:120::16 - - [28/Jul/2025:17:33:35 -0400] "GET / HTTP/1.1" 403 5909 "-" "curl/7.51.0"

2600:52:7:120::16 is the appworker node's machineNetwork IP address.
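To confirm which SNAT is actually programmed, the NAT entries on the node's OVN gateway router can be inspected (a debugging sketch; the ovnkube-node pod name and node name below are placeholders, and the nbdb container applies to the interconnect layout used by OCP 4.16/4.18):

```shell
# Find the ovnkube-node pod running on the worker hosting the test pod.
oc get pods -n openshift-ovn-kubernetes -o wide | grep ovnkube-node

# List NAT entries on that node's gateway router. With a working EgressIP,
# the pod IP should be SNATed to the egress IP, not to the node IP.
oc exec -n openshift-ovn-kubernetes ovnkube-node-xxxxx -c nbdb -- \
  ovn-nbctl lr-nat-list GR_<node-name>
```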

       

Actual results:
SNAT is done using the egress IP address only for the first EgressIP object applied.

Expected results:
For both the IPv4 and the IPv6 EgressIP objects, SNAT is done using the egress IP address.

       

Affected Platforms:
OCP deployed on bare metal with OVN-Kubernetes, using the ZTP/GitOps approach
Partner lab

Additional Info:
Combining IPv4 and IPv6 in the same EgressIP object works: with one IP from each stack in a single EgressIP object, the pod address is SNATed with the egress IP as expected.

      [root@nokia-blueprint-jumphost egressIP]# cat egress-dual-stack.yaml 
      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egressip-dual-vlan94
      spec:
        egressIPs:
          - 192.168.118.30
          - 2600:52:7:94::30
        namespaceSelector:
          matchLabels:
            env: qa
        podSelector:
          matchLabels:
            egressip: ds  
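With the combined dual-stack object applied, both address families can be checked from the same test pod (a sketch reusing the pod and HTTP server addresses from the reproduction steps above):

```shell
# Apply the combined dual-stack EgressIP shown above.
oc apply -f egress-dual-stack.yaml

# Both curls should appear in the server logs with the egress IPs
# (192.168.118.30 and 2600:52:7:94::30) as the source addresses.
oc exec -n test fedora-egressip-pod-ds -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://192.168.120.11:8080
oc exec -n test fedora-egressip-pod-ds -- \
  curl -s -o /dev/null -w '%{http_code}\n' 'http://[2600:52:7:120::9]:8080'
```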

Is this:

1. a customer issue / SD, or
2. an internal Red Hat testing failure?

We have a dual-stack environment available to reproduce any additional tests the OCP EgressIP engineering team requires.
The partner is also facing the same issue in their environment.

dfitzmau@redhat.com Darragh Fitzmaurice
ecisse@redhat.com El Hadji Sidi Ahmed Cisse
Jean Chen