- Bug
- Resolution: Unresolved
- Critical
- 4.14.z, 4.15.z, 4.16.z, 4.17.z
- Important
- None
- False
- Known Issue
- In Progress
- Customer Escalated
- Major Incident
- 09/10 Big support time sink. No PxE actions
-
Description of problem:
Bare metal UPI cluster nodes lose communication with other nodes, which also affects pod communication on those nodes. The issue can be fixed temporarily with an OVN DB rebuild on the affected nodes, but the nodes eventually degrade and lose communication again. Note that despite an OVN rebuild fixing the issue temporarily, host networking is set to true, so the kernel routing table is being used. **Update: also observed on vSphere with routingViaHost: false and ipForwarding: global.
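For reference, the gateway-mode settings mentioned above (routingViaHost, ipForwarding) can be read from the cluster network operator configuration. This is a minimal check sketch, assuming cluster-admin access with oc:

    # Show the OVN-Kubernetes gateway configuration for the cluster
    oc get network.operator.openshift.io cluster \
      -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig}{"\n"}'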
Version-Release number of selected component (if applicable):
4.14.7, 4.14.30
How reproducible:
Cannot be reproduced locally, but occurs repeatedly in the customer environment.
Steps to Reproduce:
1. Identify a host node whose pods cannot be reached from other hosts in default namespaces (tested via openshift-dns).
2. Observe that curls to that peer pod consistently time out.
3. Take tcpdumps toward the target pod: packets arrive and are acknowledged, but never route back to the client pod successfully (the SYN/ACK is seen at the pod network layer but not at the geneve interface, so it is dropped before reaching the geneve tunnel).
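A rough sketch of these checks; the namespace comes from the report, but the pod names, pod IP, node name, and port are placeholders, and the client pod is assumed to have curl available:

    # Map pods to nodes and pod IPs
    oc -n openshift-dns get pods -o wide

    # From a pod on another node, curl the target pod on the suspect node (placeholder IP/port)
    oc -n openshift-dns exec dns-default-aaaaa -- curl -sv --max-time 5 http://10.128.2.15:8080/

    # On the suspect node, capture at the pod network layer and at the geneve interface
    oc debug node/worker-1 -- chroot /host tcpdump -nn -i any 'tcp and host 10.128.2.15' -c 50
    oc debug node/worker-1 -- chroot /host tcpdump -nn -i genev_sys_6081 -c 50

In the failing case the SYN/ACK shows up in the first capture but never on genev_sys_6081, matching the "dropped before hitting the geneve tunnel" observation.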
Actual results:
Nodes repeatedly degrade and lose communication despite fixing the issue with an OVN DB rebuild (the rebuild only provides hours or days of respite, not a permanent resolution).
Expected results:
Nodes should not lose communication, and even if they did, it should not happen repeatedly.
Additional info:
What's been tried so far
========================
- Multiple OVN rebuilds on different nodes (works, but the node eventually hits the issue again)
- Flushing conntrack (does not work)
- Restarting nodes (does not work)

Data gathered
=============
- tcpdump from all interfaces for DNS pods going to port 7777 (to segregate traffic)
- ovnkube-trace
- sosreports of two nodes having communication issues before an OVN rebuild
- sosreports of two nodes having communication issues after an OVN rebuild
- OVS trace dumps of br-int and br-ex

More data in nested comments below. A command-level sketch of this data gathering follows the KCS link below.
Linking KCS: https://access.redhat.com/solutions/7091399
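A hedged sketch of the kind of commands behind the mitigations tried and the data gathered above; node and pod names, IPs, and flow fields are placeholders, and the OVN DB rebuild procedure itself is covered by the linked KCS rather than repeated here:

    # Flush conntrack on an affected node (tried; did not help)
    oc debug node/worker-1 -- chroot /host conntrack -F

    # Capture DNS-pod traffic on port 7777 across all interfaces to segregate it
    oc debug node/worker-1 -- chroot /host tcpdump -nn -i any 'tcp port 7777' -w /tmp/port7777.pcap

    # Trace a pod-to-pod path (run wherever the ovnkube-trace binary and a kubeconfig are available)
    ovnkube-trace -src-namespace openshift-dns -src dns-default-aaaaa \
                  -dst-namespace openshift-dns -dst dns-default-bbbbb \
                  -tcp -dst-port 7777 -loglevel 5

    # OVS datapath trace on br-int (repeat for br-ex with its own in_port and addresses)
    oc debug node/worker-1 -- chroot /host ovs-appctl ofproto/trace br-int \
      'in_port=<ofport>,tcp,nw_src=10.128.2.10,nw_dst=10.128.2.15,tp_dst=7777'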
- clones: OCPBUGS-43715 [4.16 IPSEC] pod to pod communication is degraded (POST)
- is depended on by: OCPBUGS-42952 [4.14 IPSEC] pod to pod communication is degraded (New)
- is documented by: OCPBUGS-44672 Add pod to pod communication is degraded note to RN docs (POST)