OpenShift Bugs · OCPBUGS-57042

[BGP] Switching from SGW (with an accepted L2 RA) to LGW crashes the ovnkube-node pod

    • Quality / Stability / Reliability
    • Rejected
    • CORENET Sprint 274

      Description of problem: after creating a Layer2 RouteAdvertisements (RA) in shared gateway mode (SGW) and then switching to local gateway mode (LGW), the ovnkube-node pod crashes.

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Create a Layer2 RA while the cluster is in shared gateway mode (SGW); it reaches Accepted status:

      oc get ra
      NAME      STATUS
      cudn      Accepted
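To see more than the one-word STATUS column (and the full condition message that appears after the switch in step 2), the RA object can be inspected directly. These are generic `oc` inspection commands; `ra` as the resource short name is taken from the report above, and the exact status field layout is not shown in this bug, so check the full YAML rather than assuming a particular condition path:

```shell
# Full object, including status conditions with the acceptance message
oc get ra cudn -o yaml

# Human-readable summary of the same object
oc describe ra cudn
```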
      

      2. Switch the cluster to local gateway mode (LGW); the L2 RA now shows Not Accepted status:

      oc get ra
      NAME      STATUS
      cudn      Not Accepted: configuration error: BGP is currently not supported for Layer2 networks in local gateway mode, network: cluster_udn_l2-network-cudn1
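The report doesn't show how the gateway mode was switched. On OpenShift this is normally done through the cluster Network operator's `gatewayConfig.routingViaHost` knob (`true` = local gateway, `false` = shared gateway); a minimal sketch, to be verified against your cluster's Network operator API:

```shell
# Switch OVN-Kubernetes to local gateway mode (routingViaHost: true).
# Sketch only; assumes the standard network.operator "cluster" object.
oc patch network.operator cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'
```

The rollout restarts the ovnkube-node pods, which is when the crash below surfaces.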
      

      3. The ovnkube-node pod on the affected node crash-loops, and its log shows:

      oc get pods -n openshift-ovn-kubernetes
      NAME                                    READY   STATUS             RESTARTS      AGE
      ovnkube-control-plane-6f64fcb6f-689lm   2/2     Running            0             2m31s
      ovnkube-control-plane-6f64fcb6f-kcllh   2/2     Running            0             2m32s
      ovnkube-node-7287x                      7/8     CrashLoopBackOff   4 (53s ago)   2m30s
      ovnkube-node-ckxvd                      8/8     Running            0             5m4s
      ovnkube-node-rgmct                      8/8     Running            0             3m47s
      ovnkube-node-td52z                      8/8     Running            0             4m7s
      ovnkube-node-vtk4v                      8/8     Running            0             4m45s
      ovnkube-node-zccp9                      8/8     Running            0             4m25s
      
      $ oc logs ovnkube-node-7287x -c ovnkube-controller -n openshift-ovn-kubernetes
      ...
      F0604 02:30:51.883664  910646 ovnkube.go:138] failed to run ovnkube: [failed to start network controller: failed to start NAD Controller :initial sync failed: failed to sync network cluster_udn_l2-network-cudn1: failed to fetch other network information for network cluster_udn_l2-network-cudn1: failed to reconcile network "cluster_udn_l2-network-cudn1": RouteAdvertisements "cudn" not in accepted status, failed to start node network controller: failed to init default node network controller: failed to run kubelet restart tracker: read unix @->/run/systemd/private: use of closed network connection]
      

      Actual results: the ovnkube-node pod crashes (CrashLoopBackOff).

      Expected results: the ovnkube-node pod should not crash; a RouteAdvertisements that is not in Accepted status should be handled gracefully rather than aborting controller startup.
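The fatal log above shows the NAD controller's initial sync treating "RouteAdvertisements not in accepted status" as a fatal startup error. A minimal sketch of the expected behavior, in Go (illustrative only, not the actual ovn-kubernetes code; all names here are hypothetical): treat the not-accepted RA as a transient condition, skip that network and requeue it, and reserve fatal errors for genuinely unrecoverable failures.

```go
package main

import (
	"errors"
	"fmt"
)

// errRANotAccepted marks the condition from the crash log: the
// RouteAdvertisements CR exists but is not in Accepted status.
var errRANotAccepted = errors.New("RouteAdvertisements not in accepted status")

// reconcileNetwork simulates syncing one network; accepted mirrors the
// RA status reported by `oc get ra`.
func reconcileNetwork(name string, accepted bool) error {
	if !accepted {
		return fmt.Errorf("failed to reconcile network %q: %w", name, errRANotAccepted)
	}
	return nil
}

// syncAll sketches the graceful path: a not-accepted RA causes the
// affected network to be skipped and requeued for retry, instead of the
// whole sync returning a fatal error (which is what crashes the pod).
func syncAll(networks map[string]bool) (requeued []string, err error) {
	for name, accepted := range networks {
		if e := reconcileNetwork(name, accepted); e != nil {
			if errors.Is(e, errRANotAccepted) {
				requeued = append(requeued, name) // retry later; don't crash
				continue
			}
			return nil, e // genuinely fatal
		}
	}
	return requeued, nil
}

func main() {
	requeued, err := syncAll(map[string]bool{
		"cluster_udn_l2-network-cudn1": false, // the rejected L2 RA from step 2
	})
	fmt.Println(requeued, err)
}
```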

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen on other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc.) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template, please see OCPBUGS Template Training for Networking components.

              rhn-support-arghosh Arnab Ghosh
              rh-ee-meinli Meina Li
