OpenShift Bugs / OCPBUGS-55028

[BGP pre-merge regression] EgressIP Multi-NIC doesn't work on pre-merge BGP image after FRR is enabled as an additionalRoutingCapability in CNO

      Description of problem:
      The EgressIP (EIP) Multi-NIC automated test cases failed on a cluster with BGP enabled and the default network advertised, and the issue can be reproduced manually. Furthermore, after removing the RouteAdvertisements (RA) object and retesting the scenario, EgressIP on the secondary NIC still does not work.

      Version-Release number of selected component (if applicable):
      BGP Image registry.build06.ci.openshift.org/ci-ln-2v7dkvb/release:latest

      How reproducible:
      Always

      Steps to Reproduce:

      1. Start with a BGP-enabled cluster, as sketched below.
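
      A minimal sketch of how that capability is typically turned on, assuming the pre-merge image exposes the additionalRoutingCapabilities API in the network operator (an FRRConfiguration CR describing the external BGP peer is also required and is not shown here):

      # oc patch network.operator cluster --type=merge -p '{"spec":{"additionalRoutingCapabilities":{"providers":["FRR"]},"defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Enabled"}}}}'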
      2. Set ipForwarding to Global:

      # oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge
      network.operator.openshift.io/cluster patched
      

      3. Advertise the default network:

      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        name: default
      spec:
        networkSelectors:
          - networkSelectionType: DefaultNetwork
        nodeSelector: {}
        frrConfigurationSelector: {}
        advertisements:
        - "PodNetwork"
      
      
      
      # ip r show proto bgp
      10.128.0.0/23 via 192.168.111.20 dev sriovbm metric 20 
      10.128.2.0/23 via 192.168.111.23 dev sriovbm metric 20 
      10.129.0.0/23 via 192.168.111.21 dev sriovbm metric 20 
      10.129.2.0/23 via 192.168.111.25 dev sriovbm metric 20 
      10.130.0.0/23 via 192.168.111.22 dev sriovbm metric 20 
      10.131.0.0/23 via 192.168.111.24 dev sriovbm metric 20 
      

      4. Label one worker node as an egress node:
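
      worker-2 is the node that later receives the assignment; the standard OVN-Kubernetes egress-assignable label is used:

      # oc label node worker-2 k8s.ovn.org/egress-assignable=""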

      5. Create an EgressIP object whose egress IP is on the secondary interface:

      # oc get egressip
      NAME         EGRESSIPS      ASSIGNED NODE   ASSIGNED EGRESSIPS
      egressip-2   172.22.0.200   worker-2        172.22.0.200
      
      # oc get egressip -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.ovn.org/v1
        kind: EgressIP
        metadata:
          annotations:
            k8s.ovn.org/egressip-mark: "50002"
          creationTimestamp: "2025-04-15T10:25:17Z"
          generation: 2
          name: egressip-2
          resourceVersion: "102677"
          uid: 53011e10-675f-462b-8cb7-c184e6b18f9b
        spec:
          egressIPs:
          - 172.22.0.200
          namespaceSelector:
            matchLabels:
              name: qe
        status:
          items:
          - egressIP: 172.22.0.200
            node: worker-2
      kind: List
      metadata:
        resourceVersion: ""
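
      For reference, a manifest that would produce the object above (reconstructed from the spec in the output; nothing beyond these fields is implied):

      apiVersion: k8s.ovn.org/v1
      kind: EgressIP
      metadata:
        name: egressip-2
      spec:
        egressIPs:
        - 172.22.0.200
        namespaceSelector:
          matchLabels:
            name: qe

      Note that 172.22.0.200 sits on the secondary interface's subnet rather than on the primary node network (192.168.111.0/24), which is what makes this a Multi-NIC assignment.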
      

      6. Create a namespace test and test pods in it (a sketch follows). Before the egress label is added, the test pods can reach the bastion host:
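
      The workload can be created along these lines (hypothetical manifest; the image used by the automated test is not shown in this report):

      # oc create ns test
      # cat test-rc.yaml
      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: test-rc
      spec:
        replicas: 2
        selector:
          app: test
        template:
          metadata:
            labels:
              app: test
          spec:
            containers:
            - name: test
              image: <any curl-capable image>
      # oc -n test create -f test-rc.yaml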

      # oc rsh -n test test-rc-8vq56 
      ~ $ curl 172.22.0.1
      <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
      <html><head>
      <title>404 Not Found</title>
      </head><body>
      <h1>Not Found</h1>
      <p>The requested URL was not found on this server.</p>
      </body></html>
      

      7. Apply the egress label to namespace test:

      # oc label ns test name=qe
      namespace/test labeled
      

      8. Egress traffic is now broken:

      # oc rsh -n test test-rc-8vq56 
      ~ $ curl 172.22.0.1
      curl: (7) Failed to connect to 172.22.0.1 port 80 after 3077 ms: Host is unreachable
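
      When debugging, the host-side state for the secondary-NIC SNAT on the egress node can be inspected with generic tooling (illustrative commands; exact chain and routing-table names depend on the ovn-kubernetes version):

      # oc debug node/worker-2 -- chroot /host sh -c 'ip rule show; ip route show table all | grep 172.22; iptables-save -t nat | grep -i egress'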
      

      9. Remove the RA:

      # oc get ra
      NAME      STATUS
      default   Accepted
      # oc delete ra default
      routeadvertisements.k8s.ovn.org "default" deleted
      # 
      # ip r show proto bgp
      # 
      

      10. Delete the EgressIP and the namespace test:

      # oc delete ns test
      namespace "test" deleted
      # oc delete egressip --all
      egressip.k8s.ovn.org "egressip-2" deleted
      # oc get egressip
      No resources found
      
      

      11. Repeat the EgressIP test (recreate the EgressIP object and the test namespace):

      # oc get egressip
      NAME         EGRESSIPS      ASSIGNED NODE   ASSIGNED EGRESSIPS
      egressip-2   172.22.0.200   worker-2        172.22.0.200
      
      # oc create ns test
      namespace/test created
      
      # oc get pods -n test
      NAME            READY   STATUS    RESTARTS   AGE
      test-rc-5gvk5   1/1     Running   0          13s
      test-rc-8vq56   1/1     Running   0          13s
      # oc rsh -n test test-rc-8vq56 
      ~ $ curl 172.22.0.1
      <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
      <html><head>
      <title>404 Not Found</title>
      </head><body>
      <h1>Not Found</h1>
      <p>The requested URL was not found on this server.</p>
      </body></html>
      
      
      

      12. After adding the egress label to the namespace, egress traffic breaks again:

      # oc label ns test name=qe
      namespace/test labeled
      # oc rsh -n test test-rc-8vq56 
      ~ $ curl 172.22.0.1
      curl: (7) Failed to connect to 172.22.0.1 port 80 after 3077 ms: Host is unreachable
      

      Actual results:
      Egress traffic breaks when pods are matched by an EgressIP assigned to a secondary NIC.

      Expected results:
      EgressIP should work for the secondary NIC.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so, please provide links to multiple failures with the same error instance.
      • Did it happen in both sdn and ovn jobs? If so, please provide links to multiple failures with the same error instance.
      • Did it happen on other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so, please provide links to multiple failures with the same error instance.
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What are the srcNode, srcIP, srcNamespace and srcPodName?
      • What are the dstNode, dstIP, dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template, please see OCPBUGS Template Training for Networking components.

      Assignee: Jaime Caamaño Ruiz (jcaamano@redhat.com)
      Reporter: Huiran Wang (huirwang)