OpenShift Bugs / OCPBUGS-52194

[OVNK BGP pre-merge testing] on dualstack cluster, some v4 or v6 BGP routes would disappear from external frr container

      Description of problem: [OVNK BGP pre-merge testing] On a dual-stack cluster, some v4 or v6 BGP routes disappear over time from the external FRR container's routing table.

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Installed a dual-stack cluster, then patched the CNO to enable FRR under additionalRoutingCapabilities and to enable routeAdvertisements (a configuration sketch follows the steps below).

      2. Created an external FRR container in iBGP mode, applied receive_all.yaml and ra.yaml, and waited until the default network was fully advertised (sketches of these manifests also follow the steps below).

      3. Ran networking feature regression tests on the BGP-enabled dual-stack cluster.

      4. Over time, noticed that some v4 or v6 routes silently disappeared from the external FRR container; some of the disappeared routes came back on their own, others never did.
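
      For reference, step 1 was done roughly as sketched below. The exact patch used in this run is not attached, so this is only an approximation of the Network operator change that enables the FRR routing provider and route advertisements:

      #oc patch network.operator.openshift.io cluster --type=merge -p '
        {
          "spec": {
            "additionalRoutingCapabilities": { "providers": ["FRR"] },
            "defaultNetwork": {
              "ovnKubernetesConfig": { "routeAdvertisements": "Enabled" }
            }
          }
        }'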
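
      The ra.yaml and receive_all.yaml manifests from step 2 are not attached either; the sketches below show what they typically contain for this kind of setup, not the exact files used in this run (ASN, neighbor address and resource names are placeholders, and any selectors are omitted):

      # ra.yaml: advertise the default pod network over BGP
      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        name: default
      spec:
        advertisements:
        - PodNetwork

      # receive_all.yaml: FRRConfiguration telling the cluster-side FRR speakers
      # to accept all routes received from the external iBGP peer
      apiVersion: frrk8s.metallb.io/v1beta1
      kind: FRRConfiguration
      metadata:
        name: receive-all
        namespace: openshift-frr-k8s
      spec:
        bgp:
          routers:
          - asn: 64512
            neighbors:
            - address: 192.168.111.1
              asn: 64512
              toReceive:
                allowed:
                  mode: all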

      Actual results: v4 or v6 routes silently disappear while all nodes are in Ready state; some, but not all, of them recover on their own.

      Expected results: v4 or v6 routes should not disappear; if a node is rebooted and becomes Ready again, its routes should fully recover within a reasonable time.

      Additional info:

       

      <Before starting regression test>

      [root@openshift-qe-026 jechen]# date; ip -6 route show | grep bgp
      Sun Mar  2 18:16:18 EST 2025
      fd01:0:0:1::/64 via fd2e:6f44:5dd8:c956::14 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:2::/64 via fd2e:6f44:5dd8:c956::16 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:3::/64 via fd2e:6f44:5dd8:c956::15 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:4::/64 via fd2e:6f44:5dd8:c956::19 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:5::/64 via fd2e:6f44:5dd8:c956::17 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:6::/64 via fd2e:6f44:5dd8:c956::18 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:7::/64 via fd2e:6f44:5dd8:c956::20 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:8::/64 via fd2e:6f44:5dd8:c956::26 dev offloadbm proto bgp metric 20 pref medium

      #ip route show | grep bgp
      10.128.0.0/23 via 192.168.111.20 dev offloadbm proto bgp metric 20 
      10.128.2.0/23 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      10.129.0.0/23 via 192.168.111.22 dev offloadbm proto bgp metric 20 
      10.129.2.0/23 via 192.168.111.24 dev offloadbm proto bgp metric 20 
      10.130.0.0/23 via 192.168.111.21 dev offloadbm proto bgp metric 20 
      10.130.2.0/23 via 192.168.111.40 dev offloadbm proto bgp metric 20 
      10.131.0.0/23 via 192.168.111.25 dev offloadbm proto bgp metric 20 
      10.131.2.0/23 via 192.168.111.47 dev offloadbm proto bgp metric 20 

       

      <About 2 hours after starting the regression test>

      #date; ip -6 route show | grep bgp
      Sun Mar  2 20:07:20 EST 2025
      fd01:0:0:1::/64 via fd2e:6f44:5dd8:c956::14 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:3::/64 via fd2e:6f44:5dd8:c956::15 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:5::/64 via fd2e:6f44:5dd8:c956::17 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:6::/64 via fd2e:6f44:5dd8:c956::18 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:7::/64 via fd2e:6f44:5dd8:c956::20 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:8::/64 via fd2e:6f44:5dd8:c956::26 dev offloadbm proto bgp metric 20 pref medium

       

      all the nodes are still in Ready state:

      #oc get node
      NAME                                       STATUS   ROLES                  AGE     VERSION
      master-0.offload.openshift-qe.sdn.com      Ready    control-plane,master   3h54m   v1.32.1
      master-1.offload.openshift-qe.sdn.com      Ready    control-plane,master   3h55m   v1.32.1
      master-2.offload.openshift-qe.sdn.com      Ready    control-plane,master   3h54m   v1.32.1
      openshift-qe-025.lab.eng.rdu2.redhat.com   Ready    sriov,worker           176m    v1.32.1
      openshift-qe-029.lab.eng.rdu2.redhat.com   Ready    sriov,worker           175m    v1.32.1
      worker-0.offload.openshift-qe.sdn.com      Ready    worker                 3h34m   v1.32.1
      worker-1.offload.openshift-qe.sdn.com      Ready    worker                 3h34m   v1.32.1
      worker-2.offload.openshift-qe.sdn.com      Ready    worker                 3h34m   v1.32.1

       

      #oc get FRRNodeState -owide
      NAME                                       AGE
      master-0.offload.openshift-qe.sdn.com      151m
      master-1.offload.openshift-qe.sdn.com      151m
      master-2.offload.openshift-qe.sdn.com      151m
      openshift-qe-025.lab.eng.rdu2.redhat.com   151m
      openshift-qe-029.lab.eng.rdu2.redhat.com   151m
      worker-0.offload.openshift-qe.sdn.com      151m
      worker-1.offload.openshift-qe.sdn.com      151m
      worker-2.offload.openshift-qe.sdn.com      151m
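
      The FRRNodeState listing above only shows each object's age; to see the FRR configuration and status a given node is actually reporting (for example, a node whose advertised subnet went missing), the full object can be dumped:

      #oc get frrnodestate worker-2.offload.openshift-qe.sdn.com -o yaml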

       

      #oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.k8s\.ovn\.org/node-subnets}{"\n"}{end}'
      master-0.offload.openshift-qe.sdn.com      {"default":["10.128.0.0/23","fd01:0:0:1::/64"]}
      master-1.offload.openshift-qe.sdn.com      {"default":["10.130.0.0/23","fd01:0:0:3::/64"]}
      master-2.offload.openshift-qe.sdn.com      {"default":["10.129.0.0/23","fd01:0:0:2::/64"]}
      openshift-qe-025.lab.eng.rdu2.redhat.com   {"default":["10.130.2.0/23","fd01:0:0:7::/64"]}
      openshift-qe-029.lab.eng.rdu2.redhat.com   {"default":["10.131.2.0/23","fd01:0:0:8::/64"]}
      worker-0.offload.openshift-qe.sdn.com      {"default":["10.128.2.0/23","fd01:0:0:5::/64"]}
      worker-1.offload.openshift-qe.sdn.com      {"default":["10.129.2.0/23","fd01:0:0:6::/64"]}
      worker-2.offload.openshift-qe.sdn.com      {"default":["10.131.0.0/23","fd01:0:0:4::/64"]}

       

      #date; ip route show | grep bgp
      Sun Mar  2 20:28:09 EST 2025
      10.128.0.0/23 via 192.168.111.20 dev offloadbm proto bgp metric 20 
      10.129.0.0/23 via 192.168.111.22 dev offloadbm proto bgp metric 20 
      10.129.2.0/23 via 192.168.111.24 dev offloadbm proto bgp metric 20 
      10.130.0.0/23 via 192.168.111.21 dev offloadbm proto bgp metric 20 
      10.130.2.0/23 via 192.168.111.40 dev offloadbm proto bgp metric 20 
      10.131.0.0/23 via 192.168.111.25 dev offloadbm proto bgp metric 20 
      10.131.2.0/23 via 192.168.111.47 dev offloadbm proto bgp metric 20 

      #date; ip -6 route show | grep bgp
      Sun Mar  2 20:28:26 EST 2025
      fd01:0:0:1::/64 via fd2e:6f44:5dd8:c956::14 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:3::/64 via fd2e:6f44:5dd8:c956::15 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:6::/64 via fd2e:6f44:5dd8:c956::18 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:7::/64 via fd2e:6f44:5dd8:c956::20 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:8::/64 via fd2e:6f44:5dd8:c956::26 dev offloadbm proto bgp metric 20 pref medium
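
      At this point the kernel tables on the external host are missing 10.128.2.0/23, fd01:0:0:2::/64 and fd01:0:0:4::/64. To tell whether a missing prefix was withdrawn by the BGP peers or lost between FRR and the kernel, it helps to also check FRR's BGP table inside the external container, roughly as below (the container name "frr" and the use of podman are assumptions about this test setup):

      #podman exec frr vtysh -c "show bgp summary"
      #podman exec frr vtysh -c "show bgp ipv6 unicast fd01:0:0:2::/64"
      #podman exec frr vtysh -c "show bgp ipv4 unicast 10.128.2.0/23"
      #podman exec frr vtysh -c "show ipv6 route fd01:0:0:2::/64"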

       

      <roughly 45 minutes after regression test> 

      #date; ip route show | grep bgp
      Sun Mar  2 20:53:17 EST 2025
      10.128.0.0/23 via 192.168.111.20 dev offloadbm proto bgp metric 20 
      10.128.2.0/23 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      10.129.0.0/23 via 192.168.111.22 dev offloadbm proto bgp metric 20 
      10.129.2.0/23 via 192.168.111.24 dev offloadbm proto bgp metric 20 
      10.130.0.0/23 via 192.168.111.21 dev offloadbm proto bgp metric 20 
      10.130.2.0/23 via 192.168.111.40 dev offloadbm proto bgp metric 20 
      10.131.0.0/23 via 192.168.111.25 dev offloadbm proto bgp metric 20 
      10.131.2.0/23 via 192.168.111.47 dev offloadbm proto bgp metric 20

      #date; ip -6 route show | grep bgp
      Sun Mar  2 20:53:27 EST 2025
      fd01:0:0:1::/64 via fd2e:6f44:5dd8:c956::14 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:3::/64 via fd2e:6f44:5dd8:c956::15 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:5::/64 via fd2e:6f44:5dd8:c956::17 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:6::/64 via fd2e:6f44:5dd8:c956::18 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:7::/64 via fd2e:6f44:5dd8:c956::20 dev offloadbm proto bgp metric 20 pref medium
      fd01:0:0:8::/64 via fd2e:6f44:5dd8:c956::26 dev offloadbm proto bgp metric 20 pref medium

       

      45 minutes after the regression test, the v4 routes had all returned, but two v6 routes, fd01:0:0:2::/64 (master-2's node subnet) and fd01:0:0:4::/64 (worker-2's node subnet), had still not recovered.
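
      Because routes disappear and sometimes reappear silently, a simple watcher on the external host can help pin down the exact disappearance/recovery windows for correlation with the must-gather. A minimal sketch (assuming it runs on the external FRR host):

      : > /tmp/bgp-prev
      while true; do
        # snapshot all kernel routes learned via BGP (v4 and v6)
        { ip route show proto bgp; ip -6 route show proto bgp; } | sort > /tmp/bgp-now
        if ! cmp -s /tmp/bgp-prev /tmp/bgp-now; then
          echo "=== $(date -u) BGP route set changed ==="
          diff /tmp/bgp-prev /tmp/bgp-now
          cp /tmp/bgp-now /tmp/bgp-prev
        fi
        sleep 10
      done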

       

      must-gather: https://drive.google.com/file/d/1VCwhyjnUec5Ug6XkvtI5QNsi8fPTLQK5/view?usp=drive_link

       

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see OCPBUGS Template Training for Networking components

      Assignee: Jaime Caamaño Ruiz (jcaamano@redhat.com)
      Reporter: Jean Chen (jechen@redhat.com)