OCPBUGS-56405: HyperShift node recreation left old node-gateway-router-lrp-ifaddr in OVN causing traffic drops


      Description of problem:

      HyperShift Cluster on KubeVirt Platform on Baremetal

      Management Cluster - OpenShift 4.17.15
      Hosted Control Planes - OpenShift 4.16.34

      Pod-to-external and pod-to-pod traffic from one node in the cluster is experiencing a 40-50% request failure rate. Testing from a terminal on the VM itself and from a terminal on the bare-metal host where the VM is running shows no request failures to the same external target.

      Using ovnkube-trace, we can see that the "lr_in_ip_routing" stage performs an ECMP selection between two possible next hops:

      Option 1: reg0 = 100.65.0.5
      Option 2: reg0 = 100.65.0.11
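
      For reference, the two branches can be forced deterministically with ovn-trace's --select-id option (referenced in the trace output below). A minimal sketch, assuming it is run from a shell inside the affected node's ovnkube-node pod (where the node-local NB/SB databases live); the source pod IP, external destination IP and TTL are placeholders:

      # force ECMP branch 1; rerun with --select-id=2 for the other branch
      ovn-trace --select-id=1 ovn_cluster_router \
        'inport == "rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x" && eth.dst == 0a:58:0a:3b:08:01 && ip4.src == 10.59.8.10 && ip4.dst == 8.8.8.8 && ip.ttl == 64'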

      Option 1 Log - full failure - /* No MAC binding. */

      ingress(dp="ovn_cluster_router", inport="rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x")
      -----------------------------------------------------------------------------------------
       0. lr_in_admission (northd.c:12106): eth.dst == { 0a:58:a9:fe:01:01, 0a:58:0a:3b:08:01 } && inport == "rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x" && is_chassis_resident("cr-rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x"), priority 50, uuid 89e821ed
          xreg0[0..47] = 0a:58:0a:3b:08:01;
          next;
       1. lr_in_lookup_neighbor (northd.c:12294): 1, priority 0, uuid 6451619f
          reg9[2] = 1;
          next;
       2. lr_in_learn_neighbor (northd.c:12304): reg9[2] == 1, priority 100, uuid 694583cd
          mac_cache_use;
          next;
      12. lr_in_ip_routing_pre (northd.c:12547): 1, priority 0, uuid 21f82081
          reg7 = 0;
          next;
      13. lr_in_ip_routing (northd.c:10711): ip4.src == 10.58.0.0/15, priority 45, uuid 7ced382d
          ip.ttl--;
          flags.loopback = 1;
          reg8[0..15] = 1;
          reg8[16..31] = select(1=100, 2=100);
      
      select: reg8[16..31] = 1 /* Randomly selected. Use --select-id to specify. */
      -----------------------------------------------------------------------------
      14. lr_in_ip_routing_ecmp (northd.c:10755): reg8[0..15] == 1 && reg8[16..31] == 1, priority 100, uuid 1ecf9acb
          reg0 = 100.65.0.5;
          reg1 = 100.65.0.1;
          eth.src = 0a:58:64:41:00:01;
          outport = "rtoj-ovn_cluster_router";
          next;
      15. lr_in_policy (northd.c:12782): 1, priority 0, uuid 8ee58100
          reg8[0..15] = 0;
          next;
      16. lr_in_policy_ecmp (northd.c:12785): reg8[0..15] == 0, priority 150, uuid d8a2bd5c
          next;
      17. lr_in_arp_resolve (northd.c:12825): ip4, priority 1, uuid 577f969a
          get_arp(outport, reg0);
          /* No MAC binding. */
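
      As a sanity check (a sketch, again run from inside the affected node's ovnkube-node pod), we can confirm that no logical router port owns 100.65.0.5 and that no MAC binding was ever learned for it:

      # is any logical router port configured with 100.65.0.5? (no output expected)
      ovn-nbctl list Logical_Router_Port | grep -B 6 '100.65.0.5/'
      # has ovn-controller learned a MAC binding for it? (no output expected)
      ovn-sbctl find MAC_Binding ip=100.65.0.5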
      

      Option 2 Log - success - eth.dst = 0a:58:64:41:00:0b

      ingress(dp="ovn_cluster_router", inport="rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x")
      -----------------------------------------------------------------------------------------
       0. lr_in_admission (northd.c:12106): eth.dst == { 0a:58:a9:fe:01:01, 0a:58:0a:3b:08:01 } && inport == "rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x" && is_chassis_resident("cr-rtos-ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x"), priority 50, uuid 89e821ed
          xreg0[0..47] = 0a:58:0a:3b:08:01;
          next;
       1. lr_in_lookup_neighbor (northd.c:12294): 1, priority 0, uuid 6451619f
          reg9[2] = 1;
          next;
       2. lr_in_learn_neighbor (northd.c:12304): reg9[2] == 1, priority 100, uuid 694583cd
          mac_cache_use;
          next;
      12. lr_in_ip_routing_pre (northd.c:12547): 1, priority 0, uuid 21f82081
          reg7 = 0;
          next;
      13. lr_in_ip_routing (northd.c:10711): ip4.src == 10.58.0.0/15, priority 45, uuid 7ced382d
          ip.ttl--;
          flags.loopback = 1;
          reg8[0..15] = 1;
          reg8[16..31] = select(1=100, 2=100);
      
      select: reg8[16..31] = 2 /* Randomly selected. Use --select-id to specify. */
      -----------------------------------------------------------------------------
      14. lr_in_ip_routing_ecmp (northd.c:10755): reg8[0..15] == 1 && reg8[16..31] == 2, priority 100, uuid 5bf9bc34
          reg0 = 100.65.0.11;
          reg1 = 100.65.0.1;
          eth.src = 0a:58:64:41:00:01;
          outport = "rtoj-ovn_cluster_router";
          next;
      15. lr_in_policy (northd.c:12782): 1, priority 0, uuid 8ee58100
          reg8[0..15] = 0;
          next;
      16. lr_in_policy_ecmp (northd.c:12785): reg8[0..15] == 0, priority 150, uuid d8a2bd5c
          next;
      17. lr_in_arp_resolve (northd.c:13052): outport == "rtoj-ovn_cluster_router" && reg0 == 100.65.0.11, priority 100, uuid a855f32d
          eth.dst = 0a:58:64:41:00:0b;
          next;
      21. lr_in_arp_request (northd.c:13460): 1, priority 0, uuid 7a8dd6ca
          output;
      

      Looking at the addresses in the NBDB (the openshift-host-network address set), there is no 100.65.0.5, whereas 100.65.0.11 is assigned to the node experiencing the issue:

      c9bb0b60-05e6-41b7-861e-bbc347879c07":{"external_ids":["map",[["ip-family","v4"],["k8s.ovn.org/id","default-network-controller:Namespace:openshift-host-network:v4"],["k8s.ovn.org/name","openshift-host-network"],["k8s.ovn.org/owner-controller","default-network-controller"],["k8s.ovn.org/owner-type","Namespace"]]],"addresses":["set",["10.58.0.2","10.58.2.2","10.58.4.2","10.58.6.2","10.58.8.2","10.59.0.2","10.59.2.2","10.59.6.2","10.59.8.2","100.65.0.10","100.65.0.11","100.65.0.2","100.65.0.3","100.65.0.4","100.65.0.6","100.65.0.7","100.65.0.8","100.65.0.9"]],"name":"a6910206611978007605"}
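
      That address set can be dumped directly using the name shown in the record above; a sketch, against the node-local NB DB:

      ovn-nbctl list Address_Set a6910206611978007605
      # or locate it via its ovn-kubernetes owner rather than the generated name:
      ovn-nbctl list Address_Set | grep -B 2 -A 2 'openshift-host-network'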
      

      We can confirm that on the node's annotations as well:

          k8s.ovn.org/node-gateway-router-lrp-ifaddr: '{"ipv4":"100.65.0.11/16"}'
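
      The annotation can be read straight from the node object; a sketch (the dots in the annotation key must be escaped in jsonpath):

      oc get node ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x \
        -o jsonpath='{.metadata.annotations.k8s\.ovn\.org/node-gateway-router-lrp-ifaddr}'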
      

      Further in the NBDB, we can see the logical router port that owns the working eth.dst 0a:58:64:41:00:0b:

      "81cc3ed0-59fe-49b9-b515-6281ff757160":{"mac":"0a:58:64:41:00:0b","networks":"100.65.0.11/16","name":"rtoj-GR_ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x","options":["map",[["gateway_mtu","1400"]]]}},"_comment":"compacting database online","Static_MAC_Binding":{"eec5a8c1-644f-494e-aeef-a5ba96530b12":{"mac":"0a:58:a9:fe:a9:04","ip":"169.254.169.4","override_dynamic_mac":true,"logical_port":"rtoe-GR_ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x"}},"_date":1747195865068,"Logical_Router_Static_Route":{"eb1c1e4c-a8fe-4e14-ae28-e4ba0e184c41":{"external_ids":["map",[["ic-node","ocp-wdc-n1-int-1-compute-b-b9fc3788-72lr2"]]],"ip_prefix":"10.59.0.0/23","nexthop":"100.88.0.10"},"b2a184d9-15a8-4c2f-85d6-3d24095074c0":{"output_port":"rtoe-GR_ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x","ip_prefix":"169.254.169.0/29","nexthop":"169.254.169.4"},"1cbe6579-6edc-4911-ab0c-ae8623b670bd":{"external_ids":["map",[["ic-node","ocp-wdc-n1-int-1-infra-f81fddd2-tm42x"]]],"ip_prefix":"100.65.0.8/32","nexthop":"100.88.0.8"},"861eb65a-353d-4437-8b65-aa23ecc6770c":{"nexthop":"100.65.0.1","ip_prefix":"10.58.0.0/15"},"f7c35221-ed69-4fd7-a2b6-c0ab4557fb3b":{"policy":"src-ip","ip_prefix":"10.58.0.0/15","nexthop":"100.65.0.11"}
      

      Searching for 100.65.0.5 instead, only these two stale static routes remain:

      6bd70a8d-f043-46f9-a5a4-a5d3ec59ecdd":{"policy":"src-ip","ip_prefix":"10.58.0.0/15","nexthop":"100.65.0.5"}
      7d80e260-ea3b-4485-9482-8754d41c0ff2":{"nexthop":"100.65.0.5","ip_prefix":"100.65.0.5"},
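
      Those records can be queried directly from the Logical_Router_Static_Route table; a sketch:

      # any static route still pointing at the stale join IP (expected: the two records above)
      ovn-nbctl find Logical_Router_Static_Route nexthop=100.65.0.5
      # full routing table of the cluster router for comparison
      ovn-nbctl lr-route-list ovn_cluster_router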
      

      So there are two join-subnet addresses associated with this host, and traffic is ECMP-selected randomly between them: one works, while the other dead-ends at the missing MAC binding and is dropped.

      Digging further into what happened, we could see that the Node object in question was only 4 days old, yet the VM was 54 days old.

      Looking at the VM's journal, we could see that the VM was powered down and then booted back up roughly 13 minutes later:

      May 09 17:56:26 ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x systemd-logind[958]: System is powering down.
      -- Boot 59f262a85fb342e5a9a6d59f819aae04 --
      May 09 18:09:12 localhost kernel: Linux version 5.14.0-427.50.1.el9_4.x86_64 (mockbuild@x86-64-02.build.eng.rdu2.redhat.com) (gcc (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3), GNU ld version 2.35.2-43.el9) #1 SMP PREEMPT_DYNAMIC Wed Dec 18 13:06:23 EST 2024
      

      When the kubelet starts after the reboot, the node object "ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x" is no longer found:

      May 09 18:14:58 ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x kubenswrapper[2522]: E0509 18:14:58.224478    2522 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x\" not found" node="ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x"
      

      We can see the node being removed in the kube-scheduler log 3 minutes after the VM shut down:

      2025-05-09T17:59:24.543620729Z I0509 17:59:24.543557       1 node_tree.go:79] "Removed node in listed group from NodeTree" node="ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x" zone=":\x00:compute-b"
      

      Then we see the kubelet inside the original VM successfully register a node with the same name back into the cluster:

      May 09 18:15:08 ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x kubenswrapper[2553]: I0509 18:15:08.673610    2553 kubelet_node_status.go:77] "Attempting to register node" node="ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x"
      May 09 18:15:08 ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x kubenswrapper[2553]: I0509 18:15:08.679736    2553 kubelet_node_status.go:80] "Successfully registered node" node="ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x"
      

      When this happens, ovn-kubernetes assigns 100.65.0.11 to the new node and annotates it:

      2025-05-09T18:15:08.685454414Z I0509 18:15:08.685263       1 kube.go:128] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.65.0.11/16"} k8s.ovn.org/node-id:11 k8s.ovn.org/node-transit-switch-port-ifaddr:{"ipv4":"100.88.0.11/16"}] on node ocp-wdc-n1-int-1-compute-b-b9fc3788-xkx2x
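
      The current per-node assignments can be listed and compared against the src-ip routes in the NB DB; a sketch, where any route whose nexthop matches no node annotation is stale:

      # join-subnet LRP address annotated on each node
      oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.k8s\.ovn\.org/node-gateway-router-lrp-ifaddr}{"\n"}{end}'
      # src-ip routes present in the node-local NB DB
      ovn-nbctl find Logical_Router_Static_Route policy=src-ip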
      

      The original node, which was removed and replaced by a node of the same name, had the original 'node-gateway-router-lrp-ifaddr' IP of 100.65.0.5, which appears not to have been fully cleaned up: the static routes pointing at it were left behind in the NBDB.
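
      As a possible manual mitigation (a sketch only, assuming the two stale route records shown earlier are confirmed to belong to the deleted node and nothing else references 100.65.0.5), the leftover routes could be removed from the node-local NB DB so that northd stops programming the broken ECMP branch; this works around the symptom but does not address the missing cleanup itself:

      # UUIDs taken from the stale Logical_Router_Static_Route records above
      ovn-nbctl remove Logical_Router ovn_cluster_router static_routes 6bd70a8d-f043-46f9-a5a4-a5d3ec59ecdd
      ovn-nbctl remove Logical_Router ovn_cluster_router static_routes 7d80e260-ea3b-4485-9482-8754d41c0ff2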

      Version-Release number of selected component (if applicable):

      4.16.34

      How reproducible:

      Has happened twice; the trigger is not 100% known.

      Once the cluster is in the bad state, the traffic loss is 100% reproducible.

      Steps to Reproduce:
      1.
      2.
      3.

      Actual results:
      Around 50% of pod traffic from the affected node is dropped/lost.

      Expected results:
      No traffic loss

      Additional info:

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.

              Assignee: Riccardo Ravaioli (rravaiol@redhat.com)
              Reporter: Matt Robson (rhn-support-mrobson)
              QA Contact: Anurag Saxena