OpenShift Bugs / OCPBUGS-53150

[BGP UDN EIP] On UDN, after un-labeling one of the two egress nodes previously selected by the EgressIP nodeSelector, the previously advertised egressIP of the de-selected egress node is not de-advertised


      Description of problem: [BGP UDN EIP] On UDN, after un-labeling one of the two egress nodes previously selected by the EgressIP nodeSelector, the previously advertised egressIP of the de-selected egress node is not de-advertised.

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Create a UDN namespace, create an L3 UDN in the namespace, and label the namespace so that it matches the EgressIP's namespaceSelector, for example:
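
      For reference, the namespace and L3 UDN can be created with manifests along these lines (a minimal sketch, applied as two separate manifests with "oc apply -f"; the namespace name, the "org: qe" label, and the CIDR are illustrative, and the primary-UDN namespace label is an assumption):

      apiVersion: v1
      kind: Namespace
      metadata:
        name: udn-ns-79767
        labels:
          k8s.ovn.org/primary-user-defined-network: ""   # assumed to be required so the UDN can act as the namespace's primary network
          org: qe                                        # illustrative label matched by the EgressIP namespaceSelector in step 3

      apiVersion: k8s.ovn.org/v1
      kind: UserDefinedNetwork
      metadata:
        name: udn-l3
        namespace: udn-ns-79767
      spec:
        topology: Layer3
        layer3:
          role: Primary
          subnets:
            - cidr: 10.150.0.0/16          # consistent with the 10.150.x.0/24 per-node routes shown below
              hostSubnet: 24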

      2. Apply a RouteAdvertisements (RA) resource for the UDN with a networkSelector selecting the UDN created in step 1; wait until the RA is accepted and the UDN network is advertised.
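
      A sketch of such an RA, using the k8s.ovn.org/v1 RouteAdvertisements API (the matchLabels value is illustrative and is assumed to select the UDN created in step 1):

      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        name: ra-udn
      spec:
        advertisements:
          - PodNetwork
        networkSelector:
          matchLabels:
            udn: udn-l3          # illustrative; must match a label carried by the UDN's network attachment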

      3. Label two nodes as egress-assignable (node A and node B), create two EgressIP objects, and wait until each is assigned to an egress node. Then label only node A with the label matching the nodeSelector of the EgressIP RA that will be created in step 4 (example objects follow the node listing below).

      #oc get egressip
      NAME               EGRESSIPS        ASSIGNED NODE   ASSIGNED EGRESSIPS
      egressip-79767-0   192.168.111.51   worker-1        192.168.111.51
      egressip-79767-1   192.168.111.8    worker-0        192.168.111.8
      [root@sdn-09 jechen]# oc get node --show-labels | grep egress
      worker-0   Ready    worker                 7h6m    v1.32.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,k8s.ovn.org/egress-assignable=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-0,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node=A
      worker-1   Ready    worker                 7h7m    v1.32.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,k8s.ovn.org/egress-assignable=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos

      #oc get node --show-labels | grep node=A
      worker-0   Ready    worker                 7h7m    v1.32.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,k8s.ovn.org/egress-assignable=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-0,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node=A
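
      For reference, the node labels and EgressIP objects of step 3 can be created along these lines (a sketch; the "org: qe" namespaceSelector label is illustrative and must match the label on the UDN namespace from step 1):

      #oc label node worker-0 worker-1 k8s.ovn.org/egress-assignable=true
      #oc label node worker-0 node=A

      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egressip-79767-0
      spec:
        egressIPs:
          - 192.168.111.51
        namespaceSelector:
          matchLabels:
            org: qe              # illustrative; matches the UDN namespace label
      # egressip-79767-1 is defined the same way with egressIP 192.168.111.8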

      4. Create a second RA that advertises EgressIPs and carries a nodeSelector, and wait until it is accepted:

      #oc get ra -A
      NAME                  STATUS
      ra-eip-nodeselector   Accepted
      ra-udn                Accepted
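
      A sketch of the second RA (same API; the nodeSelector matches the node=A label applied to worker-0 in step 3, and the networkSelector is assumed to target the same UDN):

      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        name: ra-eip-nodeselector
      spec:
        advertisements:
          - EgressIP
        networkSelector:
          matchLabels:
            udn: udn-l3          # illustrative; same UDN as ra-udn
        nodeSelector:
          matchLabels:
            node: A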

      5. Verify that egressIP1 of egress node A is advertised while egressIP2 of egress node B is not advertised:

      #date;ip route show | grep bgp
      Fri Mar 14 05:44:32 PM EDT 2025
      10.150.0.0/24 nhid 18483 via 192.168.111.25 dev offloadbm proto bgp metric 20 
      10.150.1.0/24 nhid 18480 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      10.150.2.0/24 nhid 18475 via 192.168.111.21 dev offloadbm proto bgp metric 20 
      10.150.3.0/24 nhid 18481 via 192.168.111.24 dev offloadbm proto bgp metric 20 
      10.150.4.0/24 nhid 18485 via 192.168.111.20 dev offloadbm proto bgp metric 20 
      10.150.5.0/24 nhid 18477 via 192.168.111.22 dev offloadbm proto bgp metric 20 
      192.168.111.237 nhid 18481 via 192.168.111.24 dev offloadbm proto bgp metric 20 

       

      6. Label egress node B with the label matching the nodeSelector of the EgressIP RA; egressIP2 is then also advertised:
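
      For reference, assuming the RA nodeSelector matches node=A as shown above, node B (worker-1) is given the same label:

      #oc label node worker-1 node=A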

      #date;ip route show | grep bgp
      Fri Mar 14 05:44:45 PM EDT 2025
      10.150.0.0/24 nhid 18483 via 192.168.111.25 dev offloadbm proto bgp metric 20 
      10.150.1.0/24 nhid 18480 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      10.150.2.0/24 nhid 18475 via 192.168.111.21 dev offloadbm proto bgp metric 20 
      10.150.3.0/24 nhid 18481 via 192.168.111.24 dev offloadbm proto bgp metric 20 
      10.150.4.0/24 nhid 18485 via 192.168.111.20 dev offloadbm proto bgp metric 20 
      10.150.5.0/24 nhid 18477 via 192.168.111.22 dev offloadbm proto bgp metric 20 
      192.168.111.64 nhid 18480 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      192.168.111.237 nhid 18481 via 192.168.111.24 dev offloadbm proto bgp metric 20

       

      7. Un-label egress node A so that it is no longer selected by the EgressIP RA nodeSelector:
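
      For reference, the label is removed from node A (worker-0) with:

      #oc label node worker-0 node-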

      #date;ip route show | grep bgp
      Fri Mar 14 05:56:04 PM EDT 2025
      10.150.0.0/24 nhid 18483 via 192.168.111.25 dev offloadbm proto bgp metric 20 
      10.150.1.0/24 nhid 18480 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      10.150.2.0/24 nhid 18475 via 192.168.111.21 dev offloadbm proto bgp metric 20 
      10.150.3.0/24 nhid 18481 via 192.168.111.24 dev offloadbm proto bgp metric 20 
      10.150.4.0/24 nhid 18485 via 192.168.111.20 dev offloadbm proto bgp metric 20 
      10.150.5.0/24 nhid 18477 via 192.168.111.22 dev offloadbm proto bgp metric 20 
      192.168.111.64 nhid 18480 via 192.168.111.23 dev offloadbm proto bgp metric 20 
      192.168.111.237 nhid 18481 via 192.168.111.24 dev offloadbm proto bgp metric 20

      Actual results: egressIP1 was not de-advertised; the previously advertised egressIP1 route remained in the BGP routing table.

      Expected results: egressIP1 should be de-advertised and its route should be removed from the BGP routing table.

      Additional info:

      By comparison, this problem does not occur for EgressIP on the default network: after un-labeling one of the two egress nodes, its egressIP was de-advertised as expected.

       

      must-gather: https://drive.google.com/file/d/10EXwyKeFniJEV4ra90lHWo1CQP_C9nfC/view?usp=drive_link

       

       

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking  components

              Assignee: Jaime Caamaño Ruiz (jcaamano@redhat.com)
              Reporter: Jean Chen (jechen@redhat.com)