OCPBUGS-39157: [Pre-Merge-testing] L2/L3 UDN Pod2Egress is broken in SGW mode


    • Release Note Type: Release Note Not Required
    • Status: In Progress

      Description of problem:

      L3 egress traffic from a pod on a primary user-defined (segmented) network does not work in shared gateway (SGW) mode.

      Version-Release number of selected component (if applicable):

      build openshift/ovn-kubernetes#2274,openshift/api#2005

      oc version

      Client Version: 4.15.9
      Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
      Server Version: 4.17.0-0.ci.test-2024-08-28-123437-ci-ln-v5g4wb2-latest
      Kubernetes Version: v1.30.3-dirty

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create a UPI GCP cluster with a build from cluster-bot (see the sketch below).
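      A cluster-bot request along these lines should produce the build under test; the option spelling here is an assumption, not a verified command:

      launch openshift/ovn-kubernetes#2274,openshift/api#2005 gcp,upi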

      2. Create a namespace test with a NAD as below (creation commands are sketched after the YAML)

       oc -n test get network-attachment-definition l3-network-nad -oyaml

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        creationTimestamp: "2024-08-28T17:44:14Z"
        generation: 1
        name: l3-network-nad
        namespace: test
        resourceVersion: "108224"
        uid: 5db4ca26-39dd-45b7-8016-215664e21f5d
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "name": "l3-network",
            "type": "ovn-k8s-cni-overlay",
            "topology":"layer3",
            "subnets": "10.150.0.0/16",
            "mtu": 1300,
            "netAttachDefName": "test/l3-network-nad",
            "role": "primary"
          }
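      To reproduce, the spec above (minus the server-populated metadata fields) can be saved to a file and applied; the filename is illustrative:

      oc create namespace test
      oc -n test apply -f l3-network-nad.yaml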

      3. Create a pod in the segmented namespace test
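      Any long-running image with curl available works; a minimal manifest along these lines is one way to create the pod (the image choice is illustrative, and no network annotation is needed because the NAD carries role: primary):

      apiVersion: v1
      kind: Pod
      metadata:
        name: hello-pod
        namespace: test
      spec:
        containers:
        - name: hello-pod
          image: registry.access.redhat.com/ubi9/ubi
          command: ["sleep", "infinity"]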

       oc -n test exec -it hello-pod -- ip a

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:83:00:11 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.131.0.17/23 brd 10.131.1.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe83:11/64 scope link 
             valid_lft forever preferred_lft forever
      3: ovn-udn1@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:96:03:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.150.3.3/24 brd 10.150.3.255 scope global ovn-udn1
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe96:303/64 scope link 
             valid_lft forever preferred_lft forever

       oc -n test exec -it hello-pod -- ip r

      default via 10.150.3.1 dev ovn-udn1 
      10.128.0.0/14 via 10.131.0.1 dev eth0 
      10.131.0.0/23 dev eth0 proto kernel scope link src 10.131.0.17 
      10.150.0.0/16 via 10.150.3.1 dev ovn-udn1 
      10.150.3.0/24 dev ovn-udn1 proto kernel scope link src 10.150.3.3 
      100.64.0.0/16 via 10.131.0.1 dev eth0 
      100.65.0.0/16 via 10.150.3.1 dev ovn-udn1 
      172.30.0.0/16 via 10.150.3.1 dev ovn-udn1 

      4. Curl the IP echo server running outside the cluster; the connection times out.
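      The echo server simply returns the caller's source IP (see Additional info). If one is not already running, a stand-in can be started on the external host with socat; this sketch mirrors the port used in the reproducer, not the exact server used:

      socat TCP-LISTEN:9095,fork,reuseaddr SYSTEM:'echo $SOCAT_PEERADDR'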

       oc -n test exec -it hello-pod -- curl 10.0.0.2:9095 --connect-timeout 5

      curl: (28) Connection timeout after 5001 ms
      command terminated with exit code 28

      Actual results:

      The curl request from the UDN pod fails with a connection timeout.

      Expected results:

      The curl request should succeed, as it does from a pod on the default network (see Additional info).

      Additional info:

      Egress from a pod in a regular (default network) namespace works:

       oc -n test1 exec -it hello-pod -- curl 10.0.0.2:9095 --connect-timeout 5

      10.0.128.4
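      To narrow down where the UDN egress traffic is dropped, a packet capture on the node hosting the pod can show whether the pod's traffic ever reaches the external bridge; br-ex is the standard OVN-Kubernetes external bridge, and the node name is a placeholder:

      oc debug node/<node-name> -- chroot /host tcpdump -i br-ex -nn host 10.0.0.2 and port 9095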

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so, please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so, please provide links to multiple failures with the same error instance
      • Did it happen on other platforms (e.g. aws, azure, gcp, baremetal etc)? If so, please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template, please see
        OCPBUGS Template Training for Networking components

      Assignee: Surya Seetharaman (sseethar)
      Reporter: Arti Sood (rhn-support-asood)