OpenShift Bugs
OCPBUGS-60138
[BGP pre-merge testing] L3 CUDN BGP egressIP is broken on SGW mode cluster, podIP instead of egressIP address was used as sourceIP in egressing packets

      Description of problem: [BGP pre-merge testing] L3 CUDN BGP egressIP is broken on SGW mode cluster, podIP instead of egressIP address was used as sourceIP in egressing packet

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Create a UDN namespace; label it to match the namespaceSelector of the L3 CUDN created in step 2, and to match the namespaceSelector of the egressIP object created in step 4

      $ oc get ns e2e-test-udn-networking-munyl8w9-rbp2f --show-labels | grep org
      e2e-test-udn-networking-munyl8w9-rbp2f   Active   39s   cudn-bgp=cudn-network-e3iljjgc,k8s.ovn.org/primary-user-defined-network=null,kubernetes.io/metadata.name=e2e-test-udn-networking-munyl8w9-rbp2f,org=qe,pod-security.kubernetes.io/audit-version=latest,pod-security.kubernetes.io/audit=restricted,pod-security.kubernetes.io/enforce-version=latest,pod-security.kubernetes.io/enforce=restricted,pod-security.kubernetes.io/warn-version=latest,pod-security.kubernetes.io/warn=restricted
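
      For reference, a minimal sketch of how such a namespace could be created and labeled (the exact creation commands are not part of this report; label values are taken from the output above, and the k8s.ovn.org/primary-user-defined-network label shown there is included as an empty-value label):

      $ oc apply -f - <<EOF
      apiVersion: v1
      kind: Namespace
      metadata:
        name: e2e-test-udn-networking-munyl8w9-rbp2f
        labels:
          k8s.ovn.org/primary-user-defined-network: ""
          org: qe
          cudn-bgp: cudn-network-e3iljjgc
      EOF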

       

       

      2. Create an L3 CUDN and label it to match the networkSelector of the CUDN RA created in step 3

      $ oc get clusteruserdefinednetwork cudn-network-79715 --show-labels
      NAME                 AGE   LABELS
      cudn-network-79715   68s   app=udn

       

      $ oc get clusteruserdefinednetwork cudn-network-79715 -oyaml
      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        annotations:
          kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"k8s.ovn.org/v1","kind":"ClusterUserDefinedNetwork","metadata":{"annotations":{},"name":"cudn-network-79715"},"spec":{"namespaceSelector":{"matchLabels":{"cudn-bgp":"cudn-network-e3iljjgc"}},"network":{"layer3":{"role":"Primary","subnets":[

      {"cidr":"10.150.0.0/16","hostSubnet":24}

      ]},"topology":"Layer3"}}}
        creationTimestamp: "2025-08-05T12:45:22Z"
        finalizers:
        - k8s.ovn.org/user-defined-network-protection
        generation: 1
        labels:
          app: udn
        name: cudn-network-79715
        resourceVersion: "409524"
        uid: 8b8680a9-e350-42de-bdb1-af55969fe30c
      spec:
        namespaceSelector:
          matchLabels:
            cudn-bgp: cudn-network-e3iljjgc
        network:
          layer3:
            role: Primary
            subnets:
            - cidr: 10.150.0.0/16
              hostSubnet: 24
          topology: Layer3
      status:
        conditions:
        - lastTransitionTime: "2025-08-05T12:45:22Z"
          message: 'NetworkAttachmentDefinition has been created in following namespaces:
            [e2e-test-udn-networking-munyl8w9-rbp2f]'
          reason: NetworkAttachmentDefinitionCreated
          status: "True"
          type: NetworkCreated
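
      The applied object is equivalent to the following manifest (reconstructed from the spec above; the apply command itself is a sketch):

      $ oc apply -f - <<EOF
      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: cudn-network-79715
        labels:
          app: udn
      spec:
        namespaceSelector:
          matchLabels:
            cudn-bgp: cudn-network-e3iljjgc
        network:
          topology: Layer3
          layer3:
            role: Primary
            subnets:
            - cidr: 10.150.0.0/16
              hostSubnet: 24
      EOF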

       

      3. Apply the CUDN RA and verify it is in the Accepted state

      $ oc get ra ra-cudn -oyaml
      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        annotations:
          kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"k8s.ovn.org/v1","kind":"RouteAdvertisements","metadata":{"annotations":{},"name":"ra-cudn"},"spec":{"advertisements":["EgressIP"],"frrConfigurationSelector":{},"networkSelectors":[{"clusterUserDefinedNetworkSelector":{"networkSelector":{"matchLabels":

      {"app":"udn"}

      }},"networkSelectionType":"ClusterUserDefinedNetworks"}],"nodeSelector":{}}}
        creationTimestamp: "2025-08-05T13:01:47Z"
        generation: 2
        name: ra-cudn
        resourceVersion: "414350"
        uid: ceb5a92f-124e-48c7-ad0d-a493979db3c1
      spec:
        advertisements:
        - EgressIP
        - PodNetwork
        frrConfigurationSelector: {}
        networkSelectors:
        - clusterUserDefinedNetworkSelector:
            networkSelector:
              matchLabels:
                app: udn
          networkSelectionType: ClusterUserDefinedNetworks
        nodeSelector: {}
      status:
        conditions:
        - lastTransitionTime: "2025-08-05T13:02:20Z"
          message: ovn-kubernetes cluster-manager validated the resource and requested the
            necessary configuration changes
          observedGeneration: 2
          reason: Accepted
          status: "True"
          type: Accepted
        status: Accepted
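
      The RA is equivalent to the following manifest (reconstructed from the spec above; the annotation shows it was originally created with only EgressIP advertisements and later updated to also advertise PodNetwork):

      $ oc apply -f - <<EOF
      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        name: ra-cudn
      spec:
        advertisements:
        - EgressIP
        - PodNetwork
        frrConfigurationSelector: {}
        networkSelectors:
        - networkSelectionType: ClusterUserDefinedNetworks
          clusterUserDefinedNetworkSelector:
            networkSelector:
              matchLabels:
                app: udn
        nodeSelector: {}
      EOF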

      4. Label a node as egress-assignable, create the egressIP object, and verify the egressIP is assigned to the egress node

      $ oc get egressip egressip-79715 -oyaml
      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        annotations:
          k8s.ovn.org/egressip-mark: "50001"
        creationTimestamp: "2025-08-05T12:46:03Z"
        generation: 2
        name: egressip-79715
        resourceVersion: "409733"
        uid: 8e99f4d8-441a-44d8-9713-29c5f36173f0
      spec:
        egressIPs:
        - 192.168.111.96
        namespaceSelector:
          matchLabels:
            org: qe
        podSelector:
          matchLabels:
            color: pink
      status:
        items:
        - egressIP: 192.168.111.96
          node: worker-0
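
      A sketch of the node labeling and the EgressIP manifest for this step (the egress-assignable label key is the standard OVN-Kubernetes one; the EgressIP spec is reconstructed from the output above):

      $ oc label node worker-0 k8s.ovn.org/egress-assignable=""
      $ oc apply -f - <<EOF
      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egressip-79715
      spec:
        egressIPs:
        - 192.168.111.96
        namespaceSelector:
          matchLabels:
            org: qe
        podSelector:
          matchLabels:
            color: pink
      EOF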

      5. Create two test pods in the UDN namespace and label them to match the podSelector of the egressIP object created in step 4. One test pod is on the egress node (local EIP pod); the second is on another cluster node (remote EIP pod)

      $ oc -n e2e-test-udn-networking-pbmfrx36-6nhj2 get pod --show-labels
      NAME                                                    READY   STATUS    RESTARTS   AGE   LABELS
      hello-pod0-eip-e2e-test-udn-networking-pbmfrx36-6nhj2   1/1     Running   0          26s   color=pink,name=hello-pod
      hello-pod1-eip-e2e-test-udn-networking-pbmfrx36-6nhj2   1/1     Running   0          14s   color=pink,name=hello-pod

       

      $ oc -n e2e-test-udn-networking-pbmfrx36-6nhj2 get pod -owide
      NAME                                                    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
      hello-pod0-eip-e2e-test-udn-networking-pbmfrx36-6nhj2   1/1     Running   0          33s   10.131.0.8   worker-0   <none>           <none>
      hello-pod1-eip-e2e-test-udn-networking-pbmfrx36-6nhj2   1/1     Running   0          21s   10.129.2.9   worker-2   <none>           <none>
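
      If the test pods are not created with the matching label, they can be labeled afterwards to satisfy the egressIP podSelector, for example:

      $ oc -n e2e-test-udn-networking-pbmfrx36-6nhj2 label pod hello-pod0-eip-e2e-test-udn-networking-pbmfrx36-6nhj2 color=pink
      $ oc -n e2e-test-udn-networking-pbmfrx36-6nhj2 label pod hello-pod1-eip-e2e-test-udn-networking-pbmfrx36-6nhj2 color=pink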

       

      6. Curl the external agnhost service from the local EIP pod and the remote EIP pod, with tcpdump running on the egress node (the curl invocations are sketched after the capture below)

       

      $ oc debug node/worker-0
      Starting pod/worker-0-debug-s588j ...
      To use host binaries, run `chroot /host`. Instead, if you need to access host namespaces, run `nsenter -a -t 1`.
      Pod IP: 192.168.111.23
      If you don't see a command prompt, try pressing enter.
      sh-5.1# timeout 60s tcpdump -c 2 -nni enp2s0 host 172.20.0.100
      dropped privs to tcpdump
      tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
      listening on enp2s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
      12:48:18.381578 IP 10.150.0.5.40328 > 172.20.0.100.8000: Flags [S], seq 876036722, win 65280, options [mss 1360,sackOK,TS val 2370187681 ecr 0,nop,wscale 7], length 0
      12:48:18.381669 IP 172.20.0.100.8000 > 10.150.0.5.40328: Flags [S.], seq 474143134, ack 876036723, win 65160, options [mss 1460,sackOK,TS val 942583440 ecr 2370187681,nop,wscale 7], length 0
      2 packets captured
      10 packets received by filter
      0 packets dropped by kernel
      sh-5.1#     
      sh-5.1# 
      sh-5.1# 
      sh-5.1# timeout 60s tcpdump -c 2 -nni enp2s0 host 172.20.0.100
      dropped privs to tcpdump
      tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
      listening on enp2s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
      12:48:50.422951 IP 10.150.1.4.44448 > 172.20.0.100.8000: Flags [S], seq 536662830, win 65280, options [mss 1360,sackOK,TS val 686186285 ecr 0,nop,wscale 7], length 0
      12:48:50.424274 IP 10.150.1.4.44448 > 172.20.0.100.8000: Flags [.], ack 777484009, win 510, options [nop,nop,TS val 686186288 ecr 3102840977], length 0
      2 packets captured
      6 packets received by filter
      0 packets dropped by kernel
      sh-5.1# exit
      exit
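
      The curl invocations from the two EIP pods were along these lines (a sketch; it assumes curl is available in the test image, and 172.20.0.100:8000 is the external agnhost service seen in the captures above):

      $ oc -n e2e-test-udn-networking-pbmfrx36-6nhj2 exec hello-pod0-eip-e2e-test-udn-networking-pbmfrx36-6nhj2 -- curl -s --connect-timeout 5 172.20.0.100:8000
      $ oc -n e2e-test-udn-networking-pbmfrx36-6nhj2 exec hello-pod1-eip-e2e-test-udn-networking-pbmfrx36-6nhj2 -- curl -s --connect-timeout 5 172.20.0.100:8000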

      Actual results: the UDN podIPs of the local and remote EIP pods (10.150.0.5 and 10.150.1.4) are used as the sourceIP in the captured packets

      Expected results: the egressIP (192.168.111.96) should be used as the sourceIP

      Additional info:

      1) Used a pre-merge image built from openshift/cluster-network-operator#2752 and openshift/ovn-kubernetes#2694

      2) Without the CUDN RA, regular UDN egressIP still works

      3) The same test passed in LGW (local gateway) mode; the failure only happens in SGW (shared gateway) mode (a sketch for checking the gateway mode follows this list)

      4) The same test passed on 4.20.0-0.nightly-2025-07-31-063120

      5) The same test passed with a pre-merge image built on July 25 from openshift/ovn-kubernetes#2651, openshift/ovn-kubernetes#2569, openshift/cluster-network-operator#2714, and openshift/ovn-kubernetes#2656
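
      For reference, the gateway mode mentioned in item 3 can be checked via the routingViaHost field (false = shared gateway/SGW, true = local gateway/LGW); a sketch:

      $ oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.routingViaHost}'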

       

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges, or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn't need to read the entire case history.
      • Don't presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with "sbr-triaged"
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with "sbr-untriaged"
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label "SDN-Jira-template"
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
