OCPBUGS-36865

[UDN] Restarting OVN pods breaks primary network traffic


      Description of problem:
      After restarting all the OVN-Kubernetes pods, pod-to-pod traffic on the user-defined primary network is broken.

      Version-Release number of selected component (if applicable):
      Cluster built with PRs openshift/ovn-kubernetes#2223 and openshift/cluster-network-operator#2433, with the TechPreview feature gate enabled.
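
      If reproducing, the TechPreview feature set can be enabled with something like the following (a sketch; it assumes the standard FeatureGate CR, and note that TechPreviewNoUpgrade cannot be reverted once set):

      % oc patch featuregate cluster --type=merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'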

      How reproducible:
      Always
      Steps to Reproduce:

      1. Create namespaces ns1, ns2, and ns3.
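
      A sketch of one way to do this:

      % for ns in ns1 ns2 ns3; do oc create namespace "$ns"; done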

      2. Create a NetworkAttachmentDefinition (NAD) with role "primary" in each of ns1, ns2, and ns3:

       % oc get net-attach-def -n ns1 -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
          creationTimestamp: "2024-07-11T08:35:13Z"
          generation: 1
          name: l3-network-ns1
          namespace: ns1
          resourceVersion: "165141"
          uid: 8eca76bf-ee30-4a0e-a892-92a480086aa1
        spec:
          config: |
            {
                    "cniVersion": "0.3.1",
                    "name": "l3-network-ns1",
                    "type": "ovn-k8s-cni-overlay",
                    "topology":"layer3",
                    "subnets": "10.200.0.0/16/24",
                    "mtu": 1300,
                    "netAttachDefName": "ns1/l3-network-ns1",
                    "role": "primary"
            }
      kind: List
      metadata:
        resourceVersion: ""
      
      % oc get net-attach-def -n ns2 -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
          creationTimestamp: "2024-07-11T08:35:19Z"
          generation: 1
          name: l3-network-ns2
          namespace: ns2
          resourceVersion: "165183"
          uid: 944b50b1-106f-4683-9cea-450521260170
        spec:
          config: |
            {
                    "cniVersion": "0.3.1",
                    "name": "l3-network-ns2",
                    "type": "ovn-k8s-cni-overlay",
                    "topology":"layer3",
                    "subnets": "10.200.0.0/16/24",
                    "mtu": 1300,
                    "netAttachDefName": "ns2/l3-network-ns2",
                    "role": "primary"
            }
      kind: List
      metadata:
        resourceVersion: ""
      
      % oc get net-attach-def -n ns3 -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
          creationTimestamp: "2024-07-11T08:35:26Z"
          generation: 1
          name: l3-network-ns3
          namespace: ns3
          resourceVersion: "165257"
          uid: 93683aac-7f8a-4263-b0f6-ed9182c5c47c
        spec:
          config: |
            {
                    "cniVersion": "0.3.1",
                    "name": "l3-network-ns3",
                    "type": "ovn-k8s-cni-overlay",
                    "topology":"layer3",
                    "subnets": "10.200.0.0/16/24",
                    "mtu": 1300,
                    "netAttachDefName": "ns3/l3-network-ns3",
                    "role": "primary"
            }
      kind: List
      metadata:
        resourceVersion: ""
      
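      For reference, all three NADs can be generated with one templated loop; this is a sketch equivalent to the objects shown above:

      % for ns in ns1 ns2 ns3; do oc apply -f - <<EOF
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: l3-network-${ns}
        namespace: ${ns}
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "name": "l3-network-${ns}",
            "type": "ovn-k8s-cni-overlay",
            "topology": "layer3",
            "subnets": "10.200.0.0/16/24",
            "mtu": 1300,
            "netAttachDefName": "${ns}/l3-network-${ns}",
            "role": "primary"
          }
      EOF
      done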

      3. Create test pods in ns1, ns2, and ns3.
      Use the YAML below to create pods in ns1 (creation commands are sketched after the manifests):

      % cat data/udn/list-for-pod.json 
      {
          "apiVersion": "v1",
          "kind": "List",
          "items": [
              {
                  "apiVersion": "v1",
                  "kind": "ReplicationController",
                  "metadata": {
                      "labels": {
                          "name": "test-rc"
                      },
                      "name": "test-rc"
                  },
                  "spec": {
                      "replicas": 2,
                      "template": {
                          "metadata": {
                              "labels": {
                                  "name": "test-pods"
                              },
                            "annotations": { "k8s.v1.cni.cncf.io/networks": "l3-network-ns1"}
                          },
                          "spec": {
                              "containers": [
                                  {
                                      "image": "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4",
                                      "name": "test-pod",
                                      "imagePullPolicy": "IfNotPresent"
                                      }
                              ]
                          }
                      }
                  }
              },
              {
                  "apiVersion": "v1",
                  "kind": "Service",
                  "metadata": {
                      "labels": {
                          "name": "test-service"
                      },
                      "name": "test-service"
                  },
                  "spec": {
                      "ports": [
                          {
                              "name": "http",
                              "port": 27017,
                              "protocol": "TCP",
                              "targetPort": 8080
                          }
                      ],
                      "selector": {
                          "name": "test-pods"
                      }
                  }
              }
          ]
      }
      % oc get pods -n ns1
      NAME            READY   STATUS    RESTARTS   AGE
      test-rc-5ns7z   1/1     Running   0          3h7m
      test-rc-bxf2h   1/1     Running   0          3h7m
      
      Use the YAML below to create a pod in ns2:
      % cat data/udn/podns2.yaml 
      kind: Pod
      apiVersion: v1
      metadata:
        name: hello-pod-ns2
        namespace: ns2
        annotations:
          k8s.v1.cni.cncf.io/networks: l3-network-ns2
        labels:
          name: hello-pod-ns2
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4"
          name: hello-pod-ns2
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
      
      Use the YAML below to create a pod in ns3:
      % cat data/udn/podns3.yaml 
      kind: Pod
      apiVersion: v1
      metadata:
        name: hello-pod-ns3
        namespace: ns3
        annotations:
          k8s.v1.cni.cncf.io/networks: l3-network-ns3
        labels:
          name: hello-pod-ns3
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4"
          name: hello-pod-ns3
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
      
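      A sketch of equivalent creation and verification commands (the podns2/podns3 manifests already carry their namespace):

      % oc create -n ns1 -f data/udn/list-for-pod.json
      % oc create -f data/udn/podns2.yaml
      % oc create -f data/udn/podns3.yaml
      % for ns in ns1 ns2 ns3; do oc get pods -n "$ns"; done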
      

      4. Test pod-to-pod connectivity on the primary network in ns1; it works:

      % oc rsh -n ns1 test-rc-5ns7z  
      ~ $ ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if157: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:80:02:1e brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.128.2.30/23 brd 10.128.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe80:21e/64 scope link 
             valid_lft forever preferred_lft forever
      3: net1@if158: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:c8:01:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.200.1.3/24 brd 10.200.1.255 scope global net1
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fec8:103/64 scope link 
             valid_lft forever preferred_lft forever
      ~ $ exit
       % oc rsh -n ns1 test-rc-bxf2h  
      ~ $ ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if123: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:83:00:0c brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.131.0.12/23 brd 10.131.1.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe83:c/64 scope link 
             valid_lft forever preferred_lft forever
      3: net1@if124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:c8:02:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.200.2.3/24 brd 10.200.2.255 scope global net1
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fec8:203/64 scope link 
             valid_lft forever preferred_lft forever
      ~ $ ping 10.200.1.3
      PING 10.200.1.3 (10.200.1.3) 56(84) bytes of data.
      64 bytes from 10.200.1.3: icmp_seq=1 ttl=62 time=3.20 ms
      64 bytes from 10.200.1.3: icmp_seq=2 ttl=62 time=1.06 ms
      ^C
      --- 10.200.1.3 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1001ms
      rtt min/avg/max/mdev = 1.063/2.131/3.199/1.068 ms
      
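      The same check can be run non-interactively, which makes it easy to repeat after the restart (pod names and IPs are from this run and will differ):

      % oc rsh -n ns1 test-rc-bxf2h ping -c 2 10.200.1.3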
      

      5. Restart all the OVN-Kubernetes pods:
      % oc delete pods --all -n openshift-ovn-kubernetes
      pod "ovnkube-control-plane-97f479fdc-qxh2g" deleted
      pod "ovnkube-control-plane-97f479fdc-shkcm" deleted
      pod "ovnkube-node-b4crf" deleted
      pod "ovnkube-node-k2lzs" deleted
      pod "ovnkube-node-nfnhn" deleted
      pod "ovnkube-node-npltt" deleted
      pod "ovnkube-node-pgz4z" deleted
      pod "ovnkube-node-r9qbl" deleted

      % oc get pods -n openshift-ovn-kubernetes
      NAME                                    READY   STATUS    RESTARTS   AGE
      ovnkube-control-plane-97f479fdc-4cxkc   2/2     Running   0          43s
      ovnkube-control-plane-97f479fdc-prpcn   2/2     Running   0          43s
      ovnkube-node-g2x5q                      8/8     Running   0          41s
      ovnkube-node-jdpzx                      8/8     Running   0          40s
      ovnkube-node-jljrd                      8/8     Running   0          41s
      ovnkube-node-skd9g                      8/8     Running   0          40s
      ovnkube-node-tlkgn                      8/8     Running   0          40s
      ovnkube-node-v9qs2                      8/8     Running   0          39s

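      Before re-checking, wait for every OVN pod to become Ready again (a sketch; the 300s timeout is arbitrary):

      % oc wait pods --all -n openshift-ovn-kubernetes --for=condition=Ready --timeout=300s
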
      Check pod-to-pod connectivity on the primary network in ns1 again.

      Actual results:
      The connection on the primary network is broken:

      % oc rsh -n ns1 test-rc-bxf2h 
      ~ $ ping  10.200.1.3
      PING 10.200.1.3 (10.200.1.3) 56(84) bytes of data.
      From 10.200.2.3 icmp_seq=1 Destination Host Unreachable
      From 10.200.2.3 icmp_seq=2 Destination Host Unreachable
      From 10.200.2.3 icmp_seq=3 Destination Host Unreachable
      
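      For triage, the multus network-status annotation on an affected pod shows whether the primary UDN attachment is still being reported after the restart (a sketch; the pod name is from this run):

      % oc get pod -n ns1 test-rc-5ns7z -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'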

      Expected results:
      Pod-to-pod connectivity on the primary network is not broken by the OVN pod restart.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen on other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority; that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template, please see OCPBUGS Template Training for Networking components
