Type: Bug
Resolution: Done-Errata
Priority: Major
Version: 4.18.z
Impact: Quality / Stability / Reliability
Severity: Important
Description of problem: [BGP UDN EIP pre-merge testing] A UDN pod that is not qualified to use an egressIP should use its own UDN pod IP, not its node IP, as the source IP in egressing packets.
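For context, a minimal sketch of the EgressIP object under test, reconstructed from the outputs below (the podSelector is inferred from the color=pink label in step 5; the namespaceSelector is an assumption, as the real object is not shown in this report):
$ cat <<'EOF' | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-99999
spec:
  egressIPs:
  - 192.168.111.160
  # assumption: select the UDN test namespace by its metadata.name label
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: e2e-test-udn-networking-105ei4nw-85xtt
  # inferred from the color=pink label on pod1/pod2 in step 5
  podSelector:
    matchLabels:
      color: pink
EOF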
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1-4. Same test setup steps as in https://issues.redhat.com/browse/OCPBUGS-50964; refer to OCPBUGS-50964 for details.
After steps 1-4, the UDN pod network has been advertised correctly:
$ oc get egressips.k8s.ovn.org
NAME EGRESSIPS ASSIGNED NODE ASSIGNED EGRESSIPS
egressip-99999 192.168.111.160 openshift-qe-025.lab.eng.rdu2.redhat.com 192.168.111.160
$ ip route show | grep bgp
10.128.0.0/23 via 192.168.111.22 dev offloadbm proto bgp metric 20
10.128.2.0/23 via 192.168.111.24 dev offloadbm proto bgp metric 20
10.129.0.0/23 via 192.168.111.20 dev offloadbm proto bgp metric 20
10.129.2.0/23 via 192.168.111.23 dev offloadbm proto bgp metric 20
10.130.0.0/23 via 192.168.111.21 dev offloadbm proto bgp metric 20
10.130.2.0/23 via 192.168.111.47 dev offloadbm proto bgp metric 20
10.131.0.0/23 via 192.168.111.25 dev offloadbm proto bgp metric 20
10.131.2.0/23 via 192.168.111.40 dev offloadbm proto bgp metric 20
10.150.0.0/24 via 192.168.111.23 dev offloadbm proto bgp metric 20
10.150.1.0/24 via 192.168.111.24 dev offloadbm proto bgp metric 20
10.150.2.0/24 via 192.168.111.22 dev offloadbm proto bgp metric 20
10.150.3.0/24 via 192.168.111.47 dev offloadbm proto bgp metric 20
10.150.4.0/24 via 192.168.111.20 dev offloadbm proto bgp metric 20
10.150.5.0/24 via 192.168.111.25 dev offloadbm proto bgp metric 20
10.150.6.0/24 via 192.168.111.40 dev offloadbm proto bgp metric 20
10.150.7.0/24 via 192.168.111.21 dev offloadbm proto bgp metric 20
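In the routes above, note that 10.150.6.0/24 (the UDN subnet of the non-EIP pod created in step 5) is reachable via 192.168.111.40, the internal IP of the egress node; a quick cross-check (sketch, output abbreviated):
$ oc get node -owide | grep 192.168.111.40
openshift-qe-025.lab.eng.rdu2.redhat.com   Ready   sriov,worker   ...   192.168.111.40   ...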
5. Created 3 test pods in the UDN namespace: pod1 on the egress node and pod2 on a non-egress node, both labelled to match the podSelector of the egressIP object (I call pod1 the local EIP pod and pod2 the remote EIP pod; a labelling sketch follows the listing below). Pod3 was created on the egress node but was not given the matching label, so it is not qualified to use the egressIP.
$ oc -n e2e-test-udn-networking-105ei4nw-85xtt get pod -owide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
hello-pod-non-eip-e2e-test-udn-networking-105ei4nw-85xtt 1/1 Running 0 7s 10.131.2.25 openshift-qe-025.lab.eng.rdu2.redhat.com <none> <none> name=hello-pod
hello-pod0-e2e-test-udn-networking-105ei4nw-85xtt 1/1 Running 0 93s 10.131.2.24 openshift-qe-025.lab.eng.rdu2.redhat.com <none> <none> color=pink,name=hello-pod
hello-pod1-e2e-test-udn-networking-105ei4nw-85xtt 1/1 Running 0 74s 10.130.2.19 openshift-qe-029.lab.eng.rdu2.redhat.com <none> <none> color=pink,name=hello-pod
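The color=pink label on pod1/pod2 was applied roughly as follows (sketch; the exact commands are an assumption, the label value is taken from the listing above):
$ oc -n e2e-test-udn-networking-105ei4nw-85xtt label pod hello-pod0-e2e-test-udn-networking-105ei4nw-85xtt color=pink
$ oc -n e2e-test-udn-networking-105ei4nw-85xtt label pod hello-pod1-e2e-test-udn-networking-105ei4nw-85xtt color=pink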
6. curl www.google.com from pod3, which is not qualified to use the egressIP since its labels do not match the podSelector of the egressIP object; capture tcpdump on the egress node. The request was issued roughly as sketched below.
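(Sketch of the request from pod3; the rsh path matches the session shown further below, the curl flags are an assumption:)
$ oc -n e2e-test-udn-networking-105ei4nw-85xtt rsh hello-pod-non-eip-e2e-test-udn-networking-105ei4nw-85xtt
~ $ curl -s -o /dev/null --connect-timeout 10 http://www.google.com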
$ oc debug node/openshift-qe-025.lab.eng.rdu2.redhat.com
Starting pod/openshift-qe-025labengrdu2redhatcom-debug-29gjh ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.111.40
If you don't see a command prompt, try pressing enter.
sh-5.1# chroot /host
sh-5.1# nmcli con show
NAME UUID TYPE DEVICE
ovs-if-br-ex eaab14ff-e50b-4972-b6be-937416952c28 ovs-interface br-ex
Wired Connection 56844bad-bc00-479d-81be-781286e23d86 ethernet eno1
Wired Connection 56844bad-bc00-479d-81be-781286e23d86 ethernet ens1f1np1
br-ex d49bb115-3a0b-48a9-8c01-248c7b20d165 ovs-bridge br-ex
ovs-if-phys0 834e002c-327b-47dc-8280-1fd28e9b4576 ethernet ens3f0np0
ovs-port-br-ex faf6116f-85df-4be5-a1fe-7e2ccec45c05 ovs-port br-ex
ovs-port-phys0 5fad23ea-9b8a-40e8-98c1-5e9d790a7ee1 ovs-port ens3f0np0
Wired Connection abfcbd5b-c229-498c-a234-716d74d4a4a7 ethernet ens1f0np0
Wired Connection abfcbd5b-c229-498c-a234-716d74d4a4a7 ethernet ens2f1
Wired Connection abfcbd5b-c229-498c-a234-716d74d4a4a7 ethernet ens3f1np1
Wired Connection abfcbd5b-c229-498c-a234-716d74d4a4a7 ethernet ens7f0np0
lo caf23041-4afa-493a-a6ec-35be87af9284 loopback lo
mp6-udn-vrf 4bda0ce7-60c7-452c-a65b-c4b91a7ef62f vrf mp6-udn-vrf
sh-5.1# exit
exit
sh-5.1# timeout 60s tcpdump -c 4 -nni ens3f0np0 host www.google.com
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens3f0np0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:26:25.750947 IP 192.168.111.40.47020 > 142.251.32.68.80: Flags [S], seq 3709996138, win 65280, options [mss 1360,sackOK,TS val 1693557354 ecr 0,nop,wscale 7], length 0
22:26:25.778695 IP 142.251.32.68.80 > 192.168.111.40.47020: Flags [S.], seq 2709470209, ack 3709996139, win 65535, options [mss 1380,sackOK,TS val 1591014592 ecr 1693557354,nop,wscale 8], length 0
22:26:25.844272 IP 192.168.111.40.47020 > 142.251.32.68.80: Flags [P.], seq 1:79, ack 1, win 510, options [nop,nop,TS val 1693557448 ecr 1591014592], length 78: HTTP: GET / HTTP/1.1
22:26:25.844292 IP 192.168.111.40.47020 > 142.251.32.68.80: Flags [.], ack 1, win 510, options [nop,nop,TS val 1693557448 ecr 1591014592], length 0
4 packets captured
10 packets received by filter
0 packets dropped by kernel
timed out waiting for input: auto-logout
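Once fixed, the same capture filtered on the UDN subnet should show pod3's UDN pod IP (10.150.6.7) as the source instead of the node IP; a sketch of that check, with the interface and subnet taken from the outputs above:
sh-5.1# timeout 60s tcpdump -c 4 -nni ens3f0np0 host www.google.com and src net 10.150.6.0/24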
$ oc get node openshift-qe-025.lab.eng.rdu2.redhat.com -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
openshift-qe-025.lab.eng.rdu2.redhat.com Ready sriov,worker 3h51m v1.32.1 192.168.111.40 <none> Red Hat Enterprise Linux CoreOS 419.96.202502140538-0 5.14.0-568.el9.x86_64 cri-o://1.32.1-2.rhaos4.19.git217bc2f.el9
$ oc -n e2e-test-udn-networking-105ei4nw-85xtt rsh hello-pod-non-eip-e2e-test-udn-networking-105ei4nw-85xtt
~ $ ip a show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
link/ether 0a:58:0a:83:02:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.131.2.25/23 brd 10.131.3.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::858:aff:fe83:219/64 scope link
valid_lft forever preferred_lft forever
3: ovn-udn1@if62: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
link/ether 0a:58:0a:96:06:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.150.6.7/24 brd 10.150.6.255 scope global ovn-udn1
valid_lft forever preferred_lft forever
inet6 fe80::858:aff:fe96:607/64 scope link
valid_lft forever preferred_lft forever
~ $ exit
Actual results: pod3's node IP (192.168.111.40) was used as the source IP.
Expected results: pod3's UDN pod IP (10.150.6.7, on ovn-udn1) should be used as the source IP; there should no longer be SNAT to the node IP.
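One way to verify whether an SNAT to the node IP is still programmed for this traffic is to inspect the NAT table in the OVN NB DB on the egress node (a sketch; the pod lookup and container name assume the OVN-K interconnect layout, where each ovnkube-node pod runs a local nbdb container):
$ OVNKUBE_POD=$(oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector spec.nodeName=openshift-qe-025.lab.eng.rdu2.redhat.com -o name)
$ oc -n openshift-ovn-kubernetes exec $OVNKUBE_POD -c nbdb -- ovn-nbctl find NAT external_ip=192.168.111.40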
Additional info:
must-gather: https://drive.google.com/file/d/1Irup08oHaH1XQLnK4PBetipHXEBMEtBN/view?usp=drive_link
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn't need to read the entire case history.
- Don't presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp networking outage window from must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with "sbr-triaged"
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with "sbr-untriaged"
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label "SDN-Jira-template"
- For guidance on using this template please see
OCPBUGS Template Training for Networking components
- links to: RHBA-2025:3775 OpenShift Container Platform 4.18.z bug fix update