OpenShift Bugs / OCPBUGS-43713

[4.18 IPSEC] pod to pod communication is degraded


    • Important
    • OSDOCS Sprint 262
    • Due to a regression in libreswan, nodes with IPsec enabled might lose communication between pods on separate nodes. To avoid this condition, Red Hat recommends disabling IPsec in this release (a sketch of the mitigation follows this list). Progress on this issue is being tracked in OCPBUGS-43713.
    • Known Issue
    • In Progress
    • Customer Escalated
    • 09/10 Big support time sink. No PxE actions
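
      The release note above recommends disabling IPsec. A minimal sketch of that mitigation, assuming the ipsecConfig.mode field available in OCP 4.15 and later (verify the exact steps against the official documentation for the release in use):

      # Disable IPsec encryption for east-west (pod-to-pod) traffic:
      oc patch networks.operator.openshift.io cluster --type=merge \
        -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{"mode":"Disabled"}}}}}'

      # Confirm the resulting configuration:
      oc get networks.operator.openshift.io cluster \
        -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.ipsecConfig}'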

      Description of problem:

      Bare Metal UPI cluster
      
      Nodes lose communication with other nodes, which also affects pod communication on those nodes. The issue can be fixed temporarily with an OVN database rebuild on the affected nodes, but the nodes eventually degrade and lose communication again. Note that although an OVN rebuild fixes the issue temporarily, Host Networking is set to true, so the kernel routing table is in use.
      
      Update: also observed on vSphere with the routingViaHost: false, ipForwarding: global configuration.
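
      For reference, both gateway settings mentioned above live on the cluster Network operator CR; a sketch of how to inspect them (the example outputs are illustrative, not captured from the affected clusters):

      # Show the OVN-Kubernetes gateway configuration:
      oc get networks.operator.openshift.io cluster \
        -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig}'
      # Bare-metal case (kernel routing table in use): {"routingViaHost":true}
      # vSphere case: {"ipForwarding":"Global","routingViaHost":false}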

      Version-Release number of selected component (if applicable):

       4.14.7, 4.14.30

      How reproducible:

      Cannot be reproduced locally, but reproducible and repeatedly occurring in the customer environment.

      Steps to Reproduce:

      Identify a node whose pods cannot be reached from other hosts in default namespaces (tested via openshift-dns). Observe that curls to that peer pod consistently time out. tcpdump captures on the target pod show that packets arrive and are acknowledged but never route back to the client pod successfully (the SYN/ACK is seen at the pod network layer but not at the Geneve interface, so it is dropped before reaching the Geneve tunnel).
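
      A sketch of those verification steps; the pod names, pod IP, and probe port are hypothetical placeholders, and the node-level captures assume access via oc debug or SSH:

      # From a client pod on another node (assuming curl is available in
      # the image), observe the request timing out:
      oc exec -n openshift-dns dns-default-abcde -- \
        curl -sv --max-time 5 http://10.128.2.15:7777/

      # On the destination node (oc debug node/<node>, then chroot /host):
      # the request and its SYN/ACK are visible at the pod network layer...
      tcpdump -nn -i any 'tcp port 7777'
      # ...but the SYN/ACK never appears on the Geneve interface, so the
      # reply is dropped before it enters the tunnel:
      tcpdump -nn -i genev_sys_6081 'tcp port 7777'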

      Actual results:

      Nodes repeatedly degrade and lose communication despite the issue being fixed with an OVN database rebuild (a rebuild provides only hours or days of respite, not a permanent resolution).

      Expected results:

      Nodes should not lose communication, and even if they do, it should not happen repeatedly.

      Additional info:

      What's been tried so far
      ========================
      
      - Multiple OVN rebuilds on different nodes (works, but the node will eventually hit the issue again; a sketch of the rebuild follows this list)
      
      - Flushing conntrack (doesn't work)
      
      - Restarting nodes (doesn't work)
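
      For context, a rough sketch of the "OVN rebuild" workaround on an affected node. The database path is an assumption based on the per-node OVN-interconnect layout used in 4.14; the authoritative procedure is in the KCS linked below:

      # Remove the node's local OVN databases (path is an assumption):
      oc debug node/<affected-node> -- chroot /host \
        rm -f /var/lib/ovn-ic/etc/ovnnb_db.db /var/lib/ovn-ic/etc/ovnsb_db.db

      # Delete the node's ovnkube-node pod so the databases are rebuilt:
      oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node \
        --field-selector spec.nodeName=<affected-node>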
      
      Data gathered
      =============
      
      - tcpdump captures from all interfaces for DNS pods going to port 7777 (to segregate traffic)
      
      - ovnkube-trace (an example invocation follows this list)
      
      - sosreports of two nodes having communication issues, taken before an OVN rebuild
      
      - sosreports of two nodes having communication issues, taken after an OVN rebuild
      
      - OVS trace dumps of br-int and br-ex 
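
      As an illustration of how the trace data was gathered, an ovnkube-trace invocation might look like the following (pod names are hypothetical; the binary ships in the ovnkube image):

      # Trace a TCP flow between two DNS pods on the segregated port:
      ovnkube-trace \
        -src-namespace openshift-dns -src dns-default-abcde \
        -dst-namespace openshift-dns -dst dns-default-fghij \
        -tcp -dst-port 7777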
      
      
      ====
      
      More data in nested comments below. 
      
      

      Linking KCS: https://access.redhat.com/solutions/7091399

              trozet@redhat.com Tim Rozet
              rhn-support-cruhm Courtney Ruhm
              Huiran Wang Huiran Wang