OpenShift Bugs / OCPBUGS-37955

[Pre-Merge-Testing] UDN: failed to add logical port of Pod for primary NAD



      Description of problem:
      A pod is stuck in ContainerCreating status, and the ovnkube-controller log reports "failed to add logical port of Pod ns2/test-rc-lmx8w for NAD ns2/l3-network-ns2". The same subnet, '10.200.0.0/27/29', is configured in the primary NADs of two namespaces.

      Version-Release number of selected component (if applicable):
      The cluster was built with PR https://github.com/openshift/api/pull/1988
      4.17.0-0.ci.test-2024-08-05-015438-ci-ln-c2zspkk-latest

      How reproducible:

      Steps to Reproduce:

      1. Create a namespace ns1 and a NAD in it (an equivalent manifest is sketched after the output below)

      % oc get net-attach-def -n ns1 -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
          creationTimestamp: "2024-08-05T04:38:45Z"
          generation: 1
          name: l3-network-ns1
          namespace: ns1
          resourceVersion: "70883"
          uid: dc94b942-6624-4a31-babd-6e6cfa023769
        spec:
          config: |
            {
                    "cniVersion": "0.3.1",
                    "name": "l3-network-ns1",
                    "type": "ovn-k8s-cni-overlay",
                    "topology":"layer3",
                    "subnets": "10.200.0.0/27/29",
                    "mtu": 1300,
                    "netAttachDefName": "ns1/l3-network-ns1",
                    "role": "primary"
            }
      kind: List
      metadata:
        resourceVersion: ""
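
      The exact creation commands were not captured in this report; a manifest equivalent to the NAD shown above could be applied as follows (a sketch reconstructed from the output, file name assumed). In the layer3 "subnets" syntax, "10.200.0.0/27/29" means the /27 network is split into /29 per-node subnets.

      % oc create namespace ns1
      % cat l3-network-ns1.yaml
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: l3-network-ns1
        namespace: ns1
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "name": "l3-network-ns1",
            "type": "ovn-k8s-cni-overlay",
            "topology": "layer3",
            "subnets": "10.200.0.0/27/29",
            "mtu": 1300,
            "netAttachDefName": "ns1/l3-network-ns1",
            "role": "primary"
          }
      % oc apply -f l3-network-ns1.yaml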
      

      2. Create test pods in ns1

      % oc get pods -n ns1
      NAME            READY   STATUS    RESTARTS   AGE
      test-rc-n9hc4   1/1     Running   0          70m
      test-rc-t7j72   1/1     Running   0          70m
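
      The pod manifest itself is not included in the report; the test-rc-* names suggest a ReplicationController. A minimal RC along these lines would produce similar pods (the name, labels, and image below are placeholders, not taken from the report):

      % cat test-rc.yaml
      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: test-rc
      spec:
        replicas: 2
        selector:
          name: test-rc
        template:
          metadata:
            labels:
              name: test-rc
          spec:
            containers:
            - name: test-rc
              image: registry.example.com/hello:latest   # placeholder image
              ports:
              - containerPort: 8080
      % oc apply -f test-rc.yaml -n ns1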
      

      3. Create a namespace ns2 and a NAD in it, using the same subnet "10.200.0.0/27/29" as in ns1

      % oc get net-attach-def -n ns2 -o yaml
      apiVersion: v1
      items:
      - apiVersion: k8s.cni.cncf.io/v1
        kind: NetworkAttachmentDefinition
        metadata:
          creationTimestamp: "2024-08-05T04:44:35Z"
          generation: 1
          name: l3-network-ns2
          namespace: ns2
          resourceVersion: "72784"
          uid: 24d96e0d-9797-4115-81b7-28919664c3b3
        spec:
          config: |
            {
                    "cniVersion": "0.3.1",
                    "name": "l3-network-ns2",
                    "type": "ovn-k8s-cni-overlay",
                    "topology":"layer3",
                    "subnets": "10.200.0.0/27/29",
                    "mtu": 1300,
                    "netAttachDefName": "ns2/l3-network-ns2",
                    "role": "primary"
            }
      kind: List
      metadata:
        resourceVersion: ""
      

      4. Create pods in ns2

       % oc get pods -n ns2
      NAME            READY   STATUS              RESTARTS   AGE
      test-rc-7wgtd   1/1     Running             0          65m
      test-rc-lmx8w   0/1     ContainerCreating   0          65m
      
      % oc describe pod test-rc-lmx8w -n ns2
        Normal   Scheduled               66m   default-scheduler  Successfully assigned ns2/test-rc-lmx8w to hrw-0805a-t9sgt-worker-c-jmc4h
        Warning  FailedCreatePodSandBox  64m   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_test-rc-lmx8w_ns2_c1f38d97-8922-4fa7-a9d1-e35660e28f95_0(b5d1ae7855d675ce561b3154164738bac4011516e6d797343d7cea9a9f20ff95): error adding pod ns2_test-rc-lmx8w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b5d1ae7855d675ce561b3154164738bac4011516e6d797343d7cea9a9f20ff95" Netns:"/var/run/netns/6aef35bc-cec0-4b77-aab2-be2a5889c7ce" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=ns2;K8S_POD_NAME=test-rc-lmx8w;K8S_POD_INFRA_CONTAINER_ID=b5d1ae7855d675ce561b3154164738bac4011516e6d797343d7cea9a9f20ff95;K8S_POD_UID=c1f38d97-8922-4fa7-a9d1-e35660e28f95" Path:"" ERRORED: error configuring pod [ns2/test-rc-lmx8w] networking: [ns2/test-rc-lmx8w/c1f38d97-8922-4fa7-a9d1-e35660e28f95:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[ns2/test-rc-lmx8w b5d1ae7855d675ce561b3154164738bac4011516e6d797343d7cea9a9f20ff95 network default NAD default] [ns2/test-rc-lmx8w b5d1ae7855d675ce561b3154164738bac4011516e6d797343d7cea9a9f20ff95 network default NAD default] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded
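
      The sandbox error ends with "failed to get pod annotation: timed out waiting for annotations", i.e. ovnkube-controller never wrote the pod-networks annotation for this pod. As a suggested check (not part of the original report), the annotation can be inspected with:

      % oc get pod test-rc-lmx8w -n ns2 -o yaml | grep k8s.ovn.org/pod-networks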
      
      

      5. Check the ovnkube-controller log

      % oc logs ovnkube-node-rw9wq -n openshift-ovn-kubernetes -c ovnkube-controller | grep test-rc-lmx8w
      I0805 04:44:44.515379    3613 base_network_controller_pods.go:476] [default/ns2/test-rc-lmx8w] creating logical port ns2_test-rc-lmx8w for pod on switch hrw-0805a-t9sgt-worker-c-jmc4h
      I0805 04:44:44.515691    3613 kube.go:315] Updating pod ns2/test-rc-lmx8w
      I0805 04:44:44.530077    3613 pod.go:62] [ns2/test-rc-lmx8w] pod update took 14.419834ms
      I0805 04:44:44.530119    3613 base_network_controller_pods.go:893] [default/ns2/test-rc-lmx8w] addLogicalPort annotation time took 14.465378ms
      I0805 04:44:44.531530    3613 pods.go:241] [ns2/test-rc-lmx8w] addLogicalPort took 16.183542ms, libovsdb time 1.046883ms
      I0805 04:44:44.531669    3613 base_network_controller_secondary.go:284] [ns2/test-rc-lmx8w] addLogicalPort for NAD ns2/l3-network-ns2 took 12.422µs, libovsdb time 0s
      E0805 04:44:44.531696    3613 obj_retry.go:671] Failed to update *v1.Pod, old=ns2/test-rc-lmx8w, new=ns2/test-rc-lmx8w, error: failed to add logical port of Pod ns2/test-rc-lmx8w for NAD ns2/l3-network-ns2: timed out waiting for logical switch in logical switch cache "l3.network.ns2_hrw-0805a-t9sgt-worker-c-jmc4h" subnet: error getting logical switch l3.network.ns2_hrw-0805a-t9sgt-worker-c-jmc4h: switch not in logical switch cache
      I0805 04:44:44.531791    3613 base_network_controller_secondary.go:284] [ns2/test-rc-lmx8w] addLogicalPort for NAD ns2/l3-network-ns2 took 7.589µs, libovsdb time 0s
      E0805 04:44:44.531810    3613 obj_retry.go:671] Failed to update *v1.Pod, old=ns2/test-rc-lmx8w, new=ns2/test-rc-lmx8w, error: failed to add logical port of Pod ns2/test-rc-lmx8w for NAD ns2/l3-network-ns2: timed out waiting for logical switch in logical switch cache "l3.network.ns2_hrw-0805a-t9sgt-worker-c-jmc4h" subnet: error getting logical switch l3.network.ns2_hrw-0805a-t9sgt-worker-c-jmc4h: switch not in logical switch cache
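
      The retries fail because the node switch "l3.network.ns2_hrw-0805a-t9sgt-worker-c-jmc4h" never appears in the logical switch cache. As a suggested debugging step (assuming the interconnect layout where each ovnkube-node pod runs its own nbdb container), check whether the switch exists in the northbound database at all:

      % oc exec -n openshift-ovn-kubernetes ovnkube-node-rw9wq -c nbdb -- ovn-nbctl ls-list | grep l3.network.ns2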
      
      

      Actual results:
      The pod test-rc-lmx8w in ns2 stays in ContainerCreating; ovnkube-controller repeatedly fails to add its logical port for NAD ns2/l3-network-ns2.

      Expected results:
      The pod reaches Running status.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges, or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs from around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
