OpenShift Bugs / OCPBUGS-54640

Worker nodes stay NotReady with "upgrade hack: unable to find LRSR for node" from OVN-Kubernetes

    • Quality / Stability / Reliability
    • Severity: Critical
      Description of problem:

      Azure Red Hat OpenShift worker nodes never come up due to OVN-Kubernetes / CNI failures:

      message: 'container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady
              message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
              Has your network provider started?' 
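
      As a quick check on an affected node, the directory named in the error can be inspected directly (a minimal sketch; <node-name> is a placeholder for one of the NotReady workers):

      ./oc debug node/<node-name> -- chroot /host ls -l /etc/kubernetes/cni/net.d/

      On a healthy node this directory is non-empty; on an affected node it should be empty, matching the kubelet message above.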

      Looking at OVN-Kubernetes, the following error is logged:

      ovnkube.go:136] failed to run ovnkube: failed to start node network controller: failed to start default node network controller: upgrade hack: failed while waiting for the remote ovnkube-controller to be ready: context deadline exceeded, upgrade hack: unable to find LRSR for node <node name> 

      OVN pods never become fully ready (specifically, the ovnkube-controller container in the affected ovnkube-node pods never reports ready):

      ./oc get pods -n openshift-ovn-kubernetes
      NAME                                     READY   STATUS        RESTARTS          AGE
      ovnkube-control-plane-5f5b57d5c6-tmfwv   2/2     Running       1 (71m ago)       128m
      ovnkube-control-plane-5f5b57d5c6-wvsm6   2/2     Running       2                 129m
      ovnkube-node-5j6ct                       8/9     Running       565 (2m49s ago)   2d14h
      ovnkube-node-6lpnd                       9/9     Running       18                15d
      ovnkube-node-6tbvv                       8/9     Running       239 (3m7s ago)    26h
      ovnkube-node-976m7                       8/9     Running       353 (5m20s ago)   40h
      ovnkube-node-bskhz                       0/9     Terminating   0                 3d14h
      ovnkube-node-glcqf                       8/9     Running       341 (2m23s ago)   38h
      ovnkube-node-sh26d                       9/9     Running       9                 15d
      ovnkube-node-w8nzb                       8/9     Running       3 (5m21s ago)     21m
      ovnkube-node-xz6jd                       8/9     Running       318 (4m43s ago)   34h
      ovnkube-node-zhqzg                       9/9     Running       18                15d 
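
      For any of the 8/9 pods, the unready container can be listed with a JSONPath filter (a minimal sketch; substitute any of the affected pod names from the listing above):

      ./oc -n openshift-ovn-kubernetes get pod ovnkube-node-5j6ct \
        -o jsonpath='{range .status.containerStatuses[?(@.ready==false)]}{.name}{"\n"}{end}'

      Per the observation above, this prints ovnkube-controller for the affected pods.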

      The erroring code appears to be here: https://github.com/ovn-kubernetes/ovn-kubernetes/blob/master/go-controller/pkg/node/default_node_network_controller.go
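
      Since the error is about a missing logical router static route (LRSR) for the node, the routes actually present in the NB DB can be listed from one of the ovnkube-node pods. A minimal sketch, assuming the nbdb container in the ovnkube-node pod and ovn_cluster_router, the default cluster router name in OVN-Kubernetes:

      ./oc -n openshift-ovn-kubernetes exec ovnkube-node-5j6ct -c nbdb -- \
        ovn-nbctl lr-route-list ovn_cluster_router

      Per the error text, ovnkube-controller is waiting for a static route for the node to appear before it reports ready, so the affected node's entry is expected to be absent from this listing.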

      Version-Release number of selected component (if applicable):

      ./oc get clusterversion
      NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
      version   4.16.37   True        False         15d     Error while reconciling 4.16.37: an unknown error has occurred: MultipleErrors 
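
      The MultipleErrors status is the CVO aggregating failures from individual operators; the degraded operators can be listed with a standard check:

      ./oc get clusteroperators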

      How reproducible:

      Unable to reproduce; this is a customer cluster. A must-gather and an inspect of the openshift-ovn-kubernetes namespace are linked in the Jira comments.

      Steps to Reproduce:

      1. N/A (not reproducible outside the customer cluster; see "How reproducible" above)

      Actual results:

      All worker nodes go NotReady; as a result, the ingress routers cannot be scheduled and the customer is unable to reach the cluster.
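
      The stuck routers can be confirmed from the ingress namespace (a standard check; with no schedulable workers, the router-default pods are expected to be Pending or not Ready):

      ./oc -n openshift-ingress get pods -o wide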

      Expected results:

      Worker nodes become Ready and the CNI comes up.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD (this bug)
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so, please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so, please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc)? If so, please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue:
        • What is the srcNode, srcIP, srcNamespace and srcPodName?
        • What is the dstNode, dstIP, dstNamespace and dstPodName?
        • What is the traffic path? (examples: pod2pod? pod2external? pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template, please see
        OCPBUGS Template Training for Networking components

              Assignee: Caden Marchese (rhn-support-cmarches)
              Reporter: Caden Marchese (rhn-support-cmarches)
              QA Contact: Anurag Saxena