OpenShift Bugs / OCPBUGS-55926

[BGP] UDN pod cannot be accessed from a same-network pod with BGP advertised when the node reboots



      Description of problem:

 A UDN pod cannot be accessed from another pod on the same network when BGP advertisement is enabled and the node hosting the pod reboots.

      # oc get ra cudn
      NAME   STATUS
      cudn   Accepted
      
      # oc get pod -n blue -o wide
      NAME            READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
      test-rc-f9vcq   1/1     Running   5          171m   10.128.2.18   worker-0   <none>           <none>
      test-rc-mnscz   1/1     Running   0          171m   10.129.2.10   worker-2   <none>           <none>
      
      # oc rsh -n blue test-rc-mnscz ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:81:02:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.129.2.10/23 brd 10.129.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe81:20a/64 scope link 
             valid_lft forever preferred_lft forever
      3: ovn-udn1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:14:64:04:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 20.100.4.5/24 brd 20.100.4.255 scope global ovn-udn1
             valid_lft forever preferred_lft forever
          inet6 fe80::858:14ff:fe64:405/64 scope link 
             valid_lft forever preferred_lft forever
      
      # oc rsh -n blue test-rc-f9vcq ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:80:02:12 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.128.2.18/23 brd 10.128.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe80:212/64 scope link 
             valid_lft forever preferred_lft forever
      3: ovn-udn1@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:14:64:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 20.100.0.5/24 brd 20.100.0.255 scope global ovn-udn1
             valid_lft forever preferred_lft forever
          inet6 fe80::858:14ff:fe64:5/64 scope link 
             valid_lft forever preferred_lft forever
      
      #### after worker-0 is rebooted and back to Ready
      
      #### the pod on the rebooted node cannot be accessed from another pod
      # oc exec -n blue test-rc-mnscz -- curl 20.100.0.5:8080 --connect-timeout 2
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
        0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
      curl: (28) Connection timeout after 2001 ms
      command terminated with exit code 28
      
      #### the pod can still be accessed from the external router
      
      # curl 20.100.0.5:8080
      Hello OpenShift!
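
      Since the pod on the rebooted node is still reachable from the external router over the advertised route but not from a same-network pod on another node, one way to narrow this down is to compare the BGP and kernel route state for the rebooted node's UDN subnet (20.100.0.0/24 here) from a healthy peer node. A rough sketch follows; the openshift-frr-k8s namespace and the frr container name are assumptions and may need adjusting to the actual deployment.

      # oc debug node/worker-2 -- chroot /host ip route show | grep '20.100.0.'
      # FRR_POD=$(oc -n openshift-frr-k8s get pods --field-selector spec.nodeName=worker-2 -o name | head -n1)
      # oc -n openshift-frr-k8s rsh -c frr "$FRR_POD" vtysh -c 'show ip bgp summary'
      # oc -n openshift-frr-k8s rsh -c frr "$FRR_POD" vtysh -c 'show ip bgp 20.100.0.0/24'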
      

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Create a CUDN and a namespace with 2 test pods scheduled on 2 different nodes (a hedged manifest sketch follows this list)
      2. Create a RouteAdvertisements (RA) resource to advertise the pod network over BGP
      3. Reboot the node hosting pod A
      4. Wait for the node to become Ready again
      5. From pod B, access pod A
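
      For step 1, a minimal ClusterUserDefinedNetwork sketch consistent with the 20.100.x.x addressing seen above could look like the following; the namespace label, CIDR, and hostSubnet size are assumptions, not the exact manifest used in this report. The RouteAdvertisements fields for step 2 vary between releases, so check the CRD on the cluster with oc explain before writing the RA.

      # oc apply -f - <<'EOF'
      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: cudn
      spec:
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: blue
        network:
          topology: Layer3
          layer3:
            role: Primary
            subnets:
              - cidr: 20.100.0.0/16
                hostSubnet: 24
      EOF

      # oc explain routeadvertisements.spec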

      Actual results:

      Pod A cannot be accessed from pod B.

      Expected results:

      Pod A can be accessed from pod B.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking  components
