-
Bug
-
Resolution: Done-Errata
-
Critical
-
4.16
-
Important
-
Yes
-
SDN Sprint 254, SDN Sprint 255
-
2
-
Approved
-
False
-
-
OVN multicast was broken in setups where a sender and a receiver were on the same node.
-
Bug Fix
-
In Progress
Description of problem:
Multicast packets are dropped with 100% loss between pods in a multicast-enabled namespace
Version-Release number of selected component (if applicable):
4.16.0-0.nightly-2024-06-02-202327
How reproducible:
Always
Steps to Reproduce:
1. Create a test namespace and enable multicast on it (a sketch of the commands follows the namespace details below)
oc describe ns test
Name: test
Labels: kubernetes.io/metadata.name=test
pod-security.kubernetes.io/audit=restricted
pod-security.kubernetes.io/audit-version=v1.24
pod-security.kubernetes.io/enforce=restricted
pod-security.kubernetes.io/enforce-version=v1.24
pod-security.kubernetes.io/warn=restricted
pod-security.kubernetes.io/warn-version=v1.24
Annotations: k8s.ovn.org/multicast-enabled: true
openshift.io/sa.scc.mcs: s0:c28,c27
openshift.io/sa.scc.supplemental-groups: 1000810000/10000
openshift.io/sa.scc.uid-range: 1000810000/10000
Status: Active
No resource quota.
No LimitRange resource.
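For reference, a minimal sketch of the commands that produce a namespace like the one above; the annotation key is taken from the describe output, but the exact invocation used for this report may have differed:
% oc create namespace test
% oc annotate namespace test k8s.ovn.org/multicast-enabled=true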
2. Create multicast pods (a sketch of a matching ReplicationController manifest follows the pod listing below)
% oc get pods -n test -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
mcast-rc-67897   1/1     Running   0          10s   10.129.2.42   ip-10-0-86-58.us-east-2.compute.internal    <none>           <none>
mcast-rc-ftsq8   1/1     Running   0          10s   10.128.2.61   ip-10-0-33-247.us-east-2.compute.internal   <none>           <none>
mcast-rc-q48db   1/1     Running   0          10s   10.131.0.27   ip-10-0-1-176.us-east-2.compute.internal    <none>           <none>
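For reference, a minimal sketch of a ReplicationController that would produce pods named like the ones above; the image, the sleep entrypoint, and the securityContext details are assumptions (chosen to satisfy the restricted pod security labels shown in the namespace), not the exact manifest used for this report:
% oc create -n test -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: mcast-rc
spec:
  replicas: 3
  selector:
    app: mcast
  template:
    metadata:
      labels:
        app: mcast
    spec:
      containers:
      - name: mcast
        # hypothetical image; any image that ships omping will do
        image: quay.io/example/omping:latest
        command: ["sleep", "infinity"]
        # fields required by the restricted pod security profile enforced on the namespace
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: RuntimeDefault
EOF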
3. Test multicast traffic with omping from two pods
% oc rsh -n test mcast-rc-67897
~ $ omping -c10 10.129.2.42 10.128.2.61
10.128.2.61 : waiting for response msg
10.128.2.61 : joined (S,G) = (*, 232.43.211.234), pinging
10.128.2.61 : unicast, seq=1, size=69 bytes, dist=2, time=0.506ms
10.128.2.61 : unicast, seq=2, size=69 bytes, dist=2, time=0.595ms
10.128.2.61 : unicast, seq=3, size=69 bytes, dist=2, time=0.555ms
10.128.2.61 : unicast, seq=4, size=69 bytes, dist=2, time=0.572ms
10.128.2.61 : unicast, seq=5, size=69 bytes, dist=2, time=0.614ms
10.128.2.61 : unicast, seq=6, size=69 bytes, dist=2, time=0.653ms
10.128.2.61 : unicast, seq=7, size=69 bytes, dist=2, time=0.611ms
10.128.2.61 : unicast, seq=8, size=69 bytes, dist=2, time=0.594ms
10.128.2.61 : unicast, seq=9, size=69 bytes, dist=2, time=0.603ms
10.128.2.61 : unicast, seq=10, size=69 bytes, dist=2, time=0.687ms
10.128.2.61 : given amount of query messages was sent
10.128.2.61 : unicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 0.506/0.599/0.687/0.050
10.128.2.61 : multicast, xmt/rcv/%loss = 10/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000

% oc rsh -n test mcast-rc-ftsq8
~ $ omping -c10 10.128.2.61 10.129.2.42
10.129.2.42 : waiting for response msg
10.129.2.42 : waiting for response msg
10.129.2.42 : waiting for response msg
10.129.2.42 : waiting for response msg
10.129.2.42 : joined (S,G) = (*, 232.43.211.234), pinging
10.129.2.42 : unicast, seq=1, size=69 bytes, dist=2, time=0.463ms
10.129.2.42 : unicast, seq=2, size=69 bytes, dist=2, time=0.578ms
10.129.2.42 : unicast, seq=3, size=69 bytes, dist=2, time=0.632ms
10.129.2.42 : unicast, seq=4, size=69 bytes, dist=2, time=0.652ms
10.129.2.42 : unicast, seq=5, size=69 bytes, dist=2, time=0.635ms
10.129.2.42 : unicast, seq=6, size=69 bytes, dist=2, time=0.626ms
10.129.2.42 : unicast, seq=7, size=69 bytes, dist=2, time=0.597ms
10.129.2.42 : unicast, seq=8, size=69 bytes, dist=2, time=0.618ms
10.129.2.42 : unicast, seq=9, size=69 bytes, dist=2, time=0.964ms
10.129.2.42 : unicast, seq=10, size=69 bytes, dist=2, time=0.619ms
10.129.2.42 : given amount of query messages was sent
10.129.2.42 : unicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 0.463/0.638/0.964/0.126
10.129.2.42 : multicast, xmt/rcv/%loss = 10/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
Actual results:
Multicast packet loss is 100%:
10.129.2.42 : multicast, xmt/rcv/%loss = 10/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
Expected results:
Multicast traffic should be delivered; packet loss should not be 100%.
Additional info:
No such issue in 4.15: the same steps on the same profile (ipi-on-aws/versioned-installer-ci) with 4.15.0-0.nightly-2024-05-31-131420 show no multicast loss.
The output for both multicast pods:
10.131.0.27 : unicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 1.176/1.239/1.269/0.027
10.131.0.27 : multicast, xmt/rcv/%loss = 10/9/9% (seq>=2 0%), min/avg/max/std-dev = 1.227/1.304/1.755/0.170
and
10.129.2.16 : unicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 1.101/1.264/1.321/0.065
10.129.2.16 : multicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 1.230/1.351/1.890/0.191
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp networking outage window from must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs (see the example capture command after this list)
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
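As an illustration of the pcap request above, a minimal sketch of a node-level capture for this bug, filtered on the pod IPs and the omping multicast group from the reproducer; the debug-node approach, capture interface, and output path are assumptions, not the exact commands used for this report:
% oc debug node/ip-10-0-86-58.us-east-2.compute.internal
# capture the omping multicast group and the unicast leg between the two test pods
sh-5.1# tcpdump -i any -nn "host 232.43.211.234 or (host 10.129.2.42 and host 10.128.2.61)" -w /host/tmp/mcast-omping.pcap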
- clones
-
OCPBUGS-34778 [4.16] Multicast packets got 100% loss
- Closed
- depends on
-
OCPBUGS-34778 [4.16] Multicast packets got 100% loss
- Closed
- links to
-
RHSA-2024:0041 OpenShift Container Platform 4.16.0 bug fix and security update