OpenShift Request For Enhancement · RFE-3899

Node port access to pod with multus broken after upgrading from 4.8 to 4.10 as the route isn't created anymore


    • Type: Feature Request
    • Resolution: Done
    • Priority: Undefined
    • Fix Version/s: openshift-4.14
    • Component/s: SDN
    • Architecture: x86_64

      NodePort access to a pod with a Multus interface is broken after upgrading from 4.8 to 4.10. After manually inserting the missing route, the issue was resolved.

      Cluster Version: 4.10.20

      Infrastructure:

      CNI: OVNKubernetes (gateway mode: Local)
      Platform: None
      Install Type: UPI
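      One way to confirm the CNI type and the OVN-Kubernetes gateway configuration is from the cluster network operator CR. A minimal sketch only: it assumes cluster-admin access, and the gatewayConfig field may be absent or named differently on older releases.

```
# Sketch: confirming the default network type and gateway configuration.
# Field availability varies between OpenShift releases; this is not the customer's actual output.
oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.type}{"\n"}'
oc get network.operator.openshift.io cluster -o yaml | grep -E -A5 'ovnKubernetesConfig|gatewayConfig'
```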

      Observation(s)

      The customer runs pods that have an additional Multus bridge interface configured so that the pods can establish connections to external networks.

      They have implemented NodePort services (externalTrafficPolicy=Cluster) pointing to these pods, which allows access to the workloads from outside the cluster.
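      For illustration only, a minimal sketch of this kind of setup. The NetworkAttachmentDefinition name (external-bridge), image names, labels, and ports below are hypothetical and are not the customer's actual objects.

```
# Sketch: a pod attached to an additional Multus network, plus a NodePort service
# with externalTrafficPolicy=Cluster pointing to it. All names and ports are hypothetical.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload
  labels:
    app: workload
  annotations:
    # attaches a second interface (net1) via an existing NetworkAttachmentDefinition
    k8s.v1.cni.cncf.io/networks: external-bridge
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    ports:
    - containerPort: 8080
EOF

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: workload-nodeport
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  selector:
    app: workload
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080
EOF
```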

      Here is the routing table inside the pod's network namespace:

      ```
      $ sudo ip netns exec $netns ip route
      default via 140.223.56.1 dev net1
      140.223.56.0/24 dev net1 proto kernel scope link src 140.223.56.203
      172.19.0.0/20 via 172.19.10.1 dev eth0
      172.19.10.0/26 dev eth0 proto kernel scope link src 172.19.10.5
      192.168.48.0/20 via 172.19.10.1 dev eth0
      ```
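      The $netns variable above refers to the pod's network namespace on the node. A sketch of one way it could be resolved, assuming a CRI-O node with crictl and jq available; the JSON paths are an assumption and may differ between runtime versions.

```
# Sketch: resolving a pod's network namespace on the node (assumes CRI-O, crictl and jq).
POD_ID=$(sudo crictl pods --name <pod-name> -q)
netns=$(sudo crictl inspectp "$POD_ID" \
  | jq -r '.info.runtimeSpec.linux.namespaces[] | select(.type=="network") | .path')
# nsenter accepts the namespace path directly, equivalent to "ip netns exec" for named namespaces
sudo nsenter --net="$netns" ip route
```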
      The source IP address of the traffic arriving via the NodePort is 100.64.0.4, an address from the 100.64.0.0/16 range that OVN-Kubernetes uses internally. Since there is no entry in the pod's routing table covering this address, the reply traffic follows the default route out net1 to an external network that cannot route 100.64.0.4, so the connection fails.
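      This can be confirmed from the node by watching where the replies actually leave the pod; a sketch, assuming tcpdump is available and reusing the interface and address from this case:

```
# Sketch: replies to the NodePort source address show up on net1 (the default route)
# instead of returning via eth0. Assumes tcpdump is installed on the node.
$ sudo ip netns exec $netns tcpdump -ni net1 'host 100.64.0.4'
```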

      The customer then manually added a route for this range in the pod's network namespace:

      ```
      $ sudo ip netns exec $netns ip route add 100.64.0.0/10 via 172.19.10.1 dev eth0

      $ sudo ip netns exec $netns ip route
      default via 140.223.56.1 dev net1
      100.64.0.0/10 via 172.19.10.1 dev eth0
      140.223.56.0/24 dev net1 proto kernel scope link src 140.223.56.203
      172.19.0.0/20 via 172.19.10.1 dev eth0
      172.19.10.0/26 dev eth0 proto kernel scope link src 172.19.10.5
      192.168.48.0/20 via 172.19.10.1 dev eth0
      ```

      After adding the route, the traffic works as expected. Here is netstat output showing the source IP address of the NodePort traffic:

      ```
      $ sudo ip netns exec $netns netstat -nat
      Active Internet connections (servers and established)
      Proto Recv-Q Send-Q Local Address Foreign Address State
      tcp 0 0 172.19.10.5:8080 0.0.0.0:* LISTEN
      tcp 0 0 172.19.10.5:8080 100.64.0.4:33818 ESTABLISHED
      ```
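      If the extra route is needed long term, one conceivable way to persist the workaround across pod restarts, rather than editing the network namespace by hand, is an init container that installs it at startup. This is a sketch only: it assumes the NET_ADMIN capability is acceptable for the workload, the image and object names are hypothetical, and the gateway and prefix are taken from the example above.

```
# Sketch: persisting the manual route via an init container with NET_ADMIN.
# Image, names and the NetworkAttachmentDefinition are hypothetical.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload
  labels:
    app: workload
  annotations:
    k8s.v1.cni.cncf.io/networks: external-bridge
spec:
  initContainers:
  - name: add-internal-subnet-route
    image: registry.example.com/tools:latest    # any image shipping iproute2
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
    command: ["sh", "-c", "ip route add 100.64.0.0/10 via 172.19.10.1 dev eth0 || true"]
  containers:
  - name: app
    image: registry.example.com/app:latest
    ports:
    - containerPort: 8080
EOF
```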

      I have been advised to raise this as an RFE. Please let me know whether this is a feasible request.

            Assignee: Marc Curry (mcurry@redhat.com)
            Reporter: Akash Dubey (rhn-support-adubey)
            Votes: 0
            Watchers: 3
