OpenShift Bugs · OCPBUGS-59865

[BGP] Pod cannot access external load balancer once the default network is advertised


      Description of problem:

      When the cluster uses an external load balancer (ELB) for ingress/API, pods can no longer reach the ELB after the default network is advertised over BGP.

      This issue can be hit on bare metal with an ELB, or on an AWS cluster as mentioned in https://issues.redhat.com/browse/CORENET-6086?focusedId=27448808&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-27448808

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Set up a cluster on bare metal with an ELB. Here I'm using a container running HAProxy as the ELB; a sketch of how such a container can be started follows its config below.
      # podman exec extlb ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if191: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
          link/ether c2:51:78:eb:ee:2c brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 192.168.111.100/24 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::c051:78ff:feeb:ee2c/64 scope link 
             valid_lft forever preferred_lft forever
      
      # podman exec extlb cat /etc/haproxy/haproxy.cfg
      defaults
          mode                    tcp
          log                     global
          timeout connect         10s
          timeout client          1m
          timeout server          1m
      frontend main
          bind :::6443 v4v6
          default_backend api
      frontend ingress
          bind :::8080  v4v6
          default_backend ingress
      frontend https_ingress
          bind :::443 v4v6
          default_backend https_ingress
      frontend machine-config-server
          bind *:22623
          default_backend machine-config-server
      backend machine-config-server
          option  httpchk GET /readyz HTTP/1.0
          option  log-health-checks
          balance roundrobin
          server bootstrp 192.168.111.31:22623 check check-ssl inter 1s fall 2 rise 3 verify none
          server master-0 192.168.111.20:22623 check check-ssl inter 1s fall 2 rise 3 verify none
          server master-1 192.168.111.21:22623 check check-ssl inter 1s fall 2 rise 3 verify none
          server master-2 192.168.111.22:22623 check check-ssl inter 1s fall 2 rise 3 verify none
      backend api
          option  httpchk GET /readyz HTTP/1.0
          option  log-health-checks
          balance roundrobin
          server bootstrp 192.168.111.31:6443 check check-ssl inter 1s fall 2 rise 3 verify none 
          server master-0 192.168.111.20:6443 check check-ssl inter 1s fall 2 rise 3 verify none
          server master-1 192.168.111.21:6443 check check-ssl inter 1s fall 2 rise 3 verify none
          server master-2 192.168.111.22:6443 check check-ssl inter 1s fall 2 rise 3 verify none
      backend ingress
          option  httpchk GET /healthz/ready  HTTP/1.0
          option  log-health-checks
          balance roundrobin
          server w-0 192.168.111.23:80 check check-ssl port 1936 inter 1s fall 2 rise 3 verify none
          server w-1 192.168.111.24:80 check check-ssl port 1936 inter 1s fall 2 rise 3 verify none
          server w-2 192.168.111.25:80 check check-ssl port 1936 inter 1s fall 2 rise 3 verify none
      backend https_ingress
          option  httpchk GET /healthz/ready  HTTP/1.0
          option  log-health-checks
          balance roundrobin
          server w-0 192.168.111.23:443 check check-ssl port 1936 inter 1s fall 2 rise 3 verify none
          server w-1 192.168.111.24:443 check check-ssl port 1936 inter 1s fall 2 rise 3 verify none
          server w-2 192.168.111.25:443 check check-ssl port 1936 inter 1s fall 2 rise 3 verify none 
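      For reference, a container like this can be started roughly as follows. This is a sketch rather than the exact command used here: the image, network name, and host config path are assumptions.

      # hypothetical invocation; image, network, and host path are placeholders
      # the mount target matches the /etc/haproxy/haproxy.cfg shown above
      podman run -d --name extlb \
          --network <bridge-reaching-192.168.111.0/24> \
          -v ./haproxy.cfg:/etc/haproxy/haproxy.cfg:Z \
          <haproxy-image>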
      2. Verify that a pod can access the ELB before the default network is advertised. The 503 below is returned by the ingress router for an unmatched Host header; what matters is that the connection itself succeeds. (A sketch for creating such a test pod follows the output.)
      # oc rsh -n z1 test-rc-6pl5s
      ~ $ curl -I https://192.168.111.100:443 -k
      HTTP/1.0 503 Service Unavailable
      pragma: no-cache
      cache-control: private, max-age=0, no-cache, no-store
      content-type: text/html
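      Any pod with curl on the cluster default network works for this check; for example (hypothetical pod name and image, the standard UBI image ships curl):

      # create a throwaway curl-capable pod in namespace z1
      oc create ns z1
      oc -n z1 run test-pod --image=registry.access.redhat.com/ubi9/ubi \
          --restart=Never -- sleep infinity
      oc -n z1 rsh test-pod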
       
      3. Advertise the default network over BGP; a sketch of the RouteAdvertisements object follows the output below.
      # oc get ra
      NAME      STATUS
      default   Accepted
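      The RouteAdvertisements object that triggers this looks roughly like the following. This is a sketch of the OVN-Kubernetes API rather than the exact manifest from this run, and it assumes FRR-based BGP routing is already enabled on the cluster with an FRRConfiguration peering to the external router:

      # ra.yaml — then: oc apply -f ra.yaml
      apiVersion: k8s.ovn.org/v1
      kind: RouteAdvertisements
      metadata:
        name: default
      spec:
        # advertise the cluster default pod network to BGP peers
        networkSelectors:
        - networkSelectionType: DefaultNetwork
        advertisements:
        - PodNetwork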
       
      4. Repeat the check from step 2; it now times out. (A capture sketch for debugging follows the output.)
      ~ $ curl -I https://192.168.111.100:443 -k --connect-timeout 3
      curl: (28) Connection timeout after 3000 ms
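      To narrow down where the traffic is dropped, a packet capture on the node hosting the test pod while re-running the failing curl can help (a sketch; the node name is a placeholder):

      # capture traffic to/from the ELB VIP during the failing curl
      oc debug node/<worker-node> -- chroot /host \
          timeout 30 tcpdump -nn -i any host 192.168.111.100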
       
      5. The authentication operator also becomes not ready (see Actual results below).

       

      Actual results:

      # oc get co
        NAME                                       VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
        authentication                             4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   False       False         False      102s    OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.sdn150.openshift-qe.sdn.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
        baremetal                                  4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   True        False         False      87m     
        cloud-controller-manager                   4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   True        False         False      91m     
        cloud-credential                           4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   True        False         False      98m     
        cluster-autoscaler                         4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   True        False         False      87m     
        config-operator                            4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   True        False         False      88m     
        console                                    4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   False       False         False      2m12s   RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.sdn150.openshift-qe.sdn.com): Get "https://console-openshift-console.apps.sdn150.openshift-qe.sdn.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
        control-plane-machine-set                  4.20.0-0-2025-07-24-132133-test-ci-ln-12f6zpt-latest   True        False         False      87m     

       

      Expected results:

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges, or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2node, etc.)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn't need to read the entire case history.
      • Don't presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2node, etc.)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure, etc.) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with "sbr-triaged"
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with "sbr-untriaged"
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label "SDN-Jira-template"
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components

       
       
       

              Assignee: Jaime Caamaño Ruiz (jcaamano@redhat.com)
              Reporter: Zhanqi Zhao (zzhao1@redhat.com)