Bug
Resolution: Won't Do
Affected Versions: 4.16, 4.16.z
Incidents & Support
Critical
Description of problem:
Several days after a migration across ESX hosts, intermittent outbound connectivity issues (roughly every 3rd to 5th connection) started occurring from pods on multiple clusters sharing the same infrastructure. Outbound connectivity from the nodes themselves worked fine, as did pod-to-pod and pod-to-service traffic. tcpdump captures showed that the failing traffic from the pods was not making it out of the node: egress traffic that should be NATed to the host network was not translated properly until after an OVN DB rebuild. The rebuild fixed the issue temporarily, but it keeps returning after a few days. The first time it occurred on the cluster being analyzed it affected every node; the second time it returned, only one node was still affected by the time troubleshooting started.
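For reference, a minimal sketch of the capture approach described above (eth0 and ens192 are the interface names from this environment; the namespace, pod, node, and the 203.0.113.10 test destination are placeholders, and this assumes tcpdump is available in the pod image and on the node):

# capture on the pod's eth0 while reproducing the outbound connections
$ oc -n <app-namespace> exec <client-pod> -- timeout 60 tcpdump -ni eth0 -w /tmp/pod.pcap 'tcp and host 203.0.113.10'
# in parallel, capture on the node uplink (ens192) to see whether the NATed packets ever leave the host
$ oc debug node/<node-name> -- chroot /host timeout 60 tcpdump -ni ens192 -w /tmp/node.pcap 'tcp and host 203.0.113.10'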
Version-Release number of selected component (if applicable): 4.16.z
How reproducible: Customer specific.
Steps to Reproduce: n/a
Actual results: Every 3rd-5th outbound connection doesn't make it through the OVS flows.
Expected results: Every connection should make it through the OVS flows and outside the node.
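As a sketch of how the failure rate above can be observed from an affected pod (the namespace, pod name, and external endpoint are placeholders):

$ for i in $(seq 1 20); do oc -n <app-namespace> exec <client-pod> -- curl -s -o /dev/null -m 5 -w "attempt $i: %{http_code}\n" https://<external-endpoint>; done
# while the issue is active, roughly every 3rd-5th attempt times out (reported as 000)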
Additional info:
From both the compute16 and compute17 OVN DBs, I see the same behavior: the logical flow (table=13 (lr_in_ip_routing), priority=69) on ovn_cluster_router is missing.

compute17:
$ ovn-sbctl lflow-list ovn_cluster_router|grep -i 100.69.0.0/23
  table=1 (lr_in_lookup_neighbor), priority=100 , match=(inport == "rtos-compute-17.ncw-az1-001.caas.bbtnet.com" && arp.spa == 100.69.0.0/23 && arp.op == 1 && is_chassis_resident("cr-rtos-compute-17.ncw-az1-001.caas.bbtnet.com")), action=(reg9[2] = lookup_arp(inport, arp.spa, arp.sha); next;)
  table=3 (lr_in_ip_input ), priority=90 , match=(inport == "rtos-compute-17.ncw-az1-001.caas.bbtnet.com" && arp.op == 1 && arp.tpa == 100.69.0.1 && arp.spa == 100.69.0.0/23), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa <-> arp.spa; outport = inport; flags.loopback = 1; output;)
  table=13(lr_in_ip_routing ), priority=71 , match=(ip4.dst == 100.69.0.0/23), action=(ip.ttl--; reg8[0..15] = 0; reg0 = ip4.dst; reg1 = 100.69.0.1; eth.src = 0a:58:64:45:00:01; outport = "rtos-compute-17.ncw-az1-001.caas.bbtnet.com"; flags.loopback = 1; next;)
[root@acc7d829efe2 ~]#

compute16:
[root@83690ee03607 ~]# ovn-sbctl lflow-list ovn_cluster_router|grep -i 100.69.6.0/23
  table=1 (lr_in_lookup_neighbor), priority=100 , match=(inport == "rtos-compute-16.ncw-az1-001.caas.bbtnet.com" && arp.spa == 100.69.6.0/23 && arp.op == 1 && is_chassis_resident("cr-rtos-compute-16.ncw-az1-001.caas.bbtnet.com")), action=(reg9[2] = lookup_arp(inport, arp.spa, arp.sha); next;)
  table=3 (lr_in_ip_input ), priority=90 , match=(inport == "rtos-compute-16.ncw-az1-001.caas.bbtnet.com" && arp.op == 1 && arp.tpa == 100.69.6.1 && arp.spa == 100.69.6.0/23), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa <-> arp.spa; outport = inport; flags.loopback = 1; output;)
  table=13(lr_in_ip_routing ), priority=71 , match=(ip4.dst == 100.69.6.0/23), action=(ip.ttl--; reg8[0..15] = 0; reg0 = ip4.dst; reg1 = 100.69.6.1; eth.src = 0a:58:64:45:06:01; outport = "rtos-compute-16.ncw-az1-001.caas.bbtnet.com"; flags.loopback = 1; next;)

From my test lab:
sh-5.1# ovn-sbctl lflow-list ovn_cluster_router|grep -i 10.128.2.0/23
  table=1 (lr_in_lookup_neighbor), priority=100 , match=(inport == "rtos-worker-0.testclustermroyovn2.lab.example.redhat.com" && arp.spa == 10.128.2.0/23 && arp.op == 1 && is_chassis_resident("cr-rtos-worker-0.testclustermroyovn2.lab.example.redhat.com")), action=(reg9[2] = lookup_arp(inport, arp.spa, arp.sha); next;)
  table=3 (lr_in_ip_input ), priority=90 , match=(inport == "rtos-worker-0.testclustermroyovn2.lab.example.redhat.com" && arp.op == 1 && arp.tpa == 10.128.2.1 && arp.spa == 10.128.2.0/23), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa <-> arp.spa; outport = inport; flags.loopback = 1; output;)
  table=13(lr_in_ip_routing ), priority=71 , match=(ip4.dst == 10.128.2.0/23), action=(ip.ttl--; reg8[0..15] = 0; reg0 = ip4.dst; reg1 = 10.128.2.1; eth.src = 0a:58:0a:80:02:01; outport = "rtos-worker-0.testclustermroyovn2.lab.example.redhat.com"; flags.loopback = 1; next;)
  table=13(lr_in_ip_routing ), priority=69 , match=(ip4.src == 10.128.2.0/23), action=(ip.ttl--; reg8[0..15] = 0; reg0 = 100.64.0.6; reg1 = 100.64.0.1; eth.src = 0a:58:64:40:00:01; outport = "rtoj-ovn_cluster_router"; flags.loopback = 1; next;)

Since the issue is resolved for now, could you please log in to the ovnkube-node pod running on compute16 and compute17, run the commands below, and provide the output in the case?

for compute16:
$ ovn-sbctl lflow-list ovn_cluster_router|grep -i 100.69.6.0/23
$ ovn-nbctl lr-route-list ovn_cluster_router |grep -i 100.69.6.0/23

for compute17:
$ ovn-sbctl lflow-list ovn_cluster_router|grep -i 100.69.0.0/23
$ ovn-nbctl lr-route-list ovn_cluster_router |grep -i 100.69.0.0/23

--
Regards,
Manish Roy
Red Hat
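As a rough sketch of how these commands are typically run on a recent OVN-Kubernetes cluster (illustrative only: <ovnkube-node-pod> and <node-name> are placeholders, and the nbdb/sbdb container names are an assumption about this cluster's layout):

# find the ovnkube-node pod scheduled on the affected node
$ oc -n openshift-ovn-kubernetes get pods -o wide --field-selector spec.nodeName=<node-name>
# southbound logical flows for the node's pod subnet
$ oc -n openshift-ovn-kubernetes exec <ovnkube-node-pod> -c sbdb -- ovn-sbctl lflow-list ovn_cluster_router | grep -i 100.69.6.0/23
# northbound routes; the missing priority=69 flow should correspond to a src-ip route for the same subnet here
$ oc -n openshift-ovn-kubernetes exec <ovnkube-node-pod> -c nbdb -- ovn-nbctl lr-route-list ovn_cluster_router | grep -i 100.69.6.0/23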
PCAPs indicate that we ARE consistently failing to get traffic from eth0 on the client pod out through ens192 on the node for this NATed call to the upstream service. Every time a SYN does leave the host NIC, the upstream replies with a SYN/ACK, so the issue is in routing from eth0 (pod) to ens192 (node/NAT) --> an OVN-Kubernetes DB issue is likely.

Looking at the NAT dump, I do not see any duplicate or stale entries for the source pod IP 100.69.0.44: there is only the one NAT entry, and no other hosts have this IP logged.

It may be prudent to open a bug with Engineering to get their opinion. My main concern is that a DB rebuild WAS performed on all nodes (including compute-17) several days ago, and the behavior has returned, now presenting on this host. The impact is somewhat limited for now, but it is not clear that the behavior won't shift to another host again. Engineering review could also help with the NBDB and SBDB analysis from the network gather. A direct NAT to the host network should essentially always succeed, so I wonder whether something else unexpected is happening with the network configuration on the node (bug or otherwise). Engineering engagement might be warranted. ~WR
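As an illustration of the NAT checks referenced above (a sketch only: <ovnkube-node-pod> and <node-name> are placeholders, the nbdb container name is an assumption, and GR_<node-name> follows standard OVN-Kubernetes gateway-router naming rather than anything confirmed in this case):

# list SNAT/DNAT entries on the node's gateway router and look for the pod IP
$ oc -n openshift-ovn-kubernetes exec <ovnkube-node-pod> -c nbdb -- ovn-nbctl lr-nat-list GR_<node-name> | grep 100.69.0.44
# cross-check live connection-tracking state on the node for the same source IP
$ oc debug node/<node-name> -- chroot /host ovs-appctl dpctl/dump-conntrack | grep 100.69.0.44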
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp networking outage window from must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs from around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see OCPBUGS Template Training for Networking components
is impacted by: OCPBUGS-57920 storage operator was unable to identify the VM UUIDs (Closed)