OCPBUGS-37535

[UDN] UDN is broken on dual stack clusters

      Description of problem: It seems that UDN pods in different namespaces are all getting the same network instead of distinct per-namespace networks.

      Version-Release number of selected component (if applicable): 4.17

      How reproducible: Always

      Steps to Reproduce:

      1. Create a dual-stack cluster from a 4.17 build that includes openshift/ovn-kubernetes#2233 (i.e. build 4.17,openshift/ovn-kubernetes#2233)

      2. Apply the manifest below (udn3.yaml), which creates namespaces ns1, ns2 and ns3, each with a primary layer3 NetworkAttachmentDefinition and a test pod.

      3. Compare the addresses on each pod's UDN interface (a minimal check is sketched after these steps).
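
      A minimal way to run that comparison, assuming the namespace and pod names from the manifest below (the loop itself is not part of the original report):

      for ns in ns1 ns2 ns3; do
        # show only the primary-UDN interface (ovn-udn1) addresses of each test pod
        oc exec -n "$ns" "hello-pod-$ns" -- ip -o addr show dev ovn-udn1
      done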

      Actual results: UDN fails on dual stack clusters

      Expected results: UDN should work on dual stack clusters

      Additional info:

       

      anusaxen@anusaxen:~$ cat udn3.yaml 
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ns1
        labels:
          name: ns1
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ns2
        labels:
          name: ns2
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ns3
        labels:
          name: ns3
      ---
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: l3-network-ns1
        namespace: ns1
      spec:
        config: |2
          {
                  "cniVersion": "0.3.1",
                  "name": "l3-network-ns1",
                  "type": "ovn-k8s-cni-overlay",
                  "topology":"layer3",
                  "subnets": "10.150.0.0/16/24,2010:100:200::0/60",
                  "mtu": 1300,
                  "netAttachDefName": "ns1/l3-network-ns1",
                  "role": "primary"
          }
      ---
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: l3-network-ns2
        namespace: ns2
      spec:
        config: |2
          {
                  "cniVersion": "0.3.1",
                  "name": "l3-network-ns2",
                  "type": "ovn-k8s-cni-overlay",
                  "topology":"layer3",
                  "subnets": "10.150.0.0/16/24,2010:100:200::0/60",
                  "mtu": 1300,
                  "netAttachDefName": "ns2/l3-network-ns2",
                  "role": "primary"
          }
      ---
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: l3-network-ns3
        namespace: ns3
      spec:
        config: |2
          {
                  "cniVersion": "0.3.1",
                  "name": "l3-network-ns3",
                  "type": "ovn-k8s-cni-overlay",
                  "topology":"layer3",
                  "subnets": "10.150.0.0/16/24,2010:100:200::0/60",
                  "mtu": 1300,
                  "netAttachDefName": "ns3/l3-network-ns3",
                  "role": "primary"
          }
      ---
      kind: Pod
      apiVersion: v1
      metadata:
        name: hello-pod-ns1
        namespace: ns1
        labels:
          name: hello-pod
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4"
          name: hello-pod-ns1
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
      ---
      kind: Pod
      apiVersion: v1
      metadata:
        name: hello-pod-ns2
        namespace: ns2
        labels:
          name: hello-pod
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4"
          name: hello-pod-ns2
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
      ---
      kind: Pod
      apiVersion: v1
      metadata:
        name: hello-pod-ns3
        namespace: ns3
        labels:
          name: hello-pod
      spec:
        securityContext:
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        containers:
        - image: "quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4"
          name: hello-pod-ns3
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
      ---
      anusaxen@anusaxen:~/git/network-check$ oc exec -n ns1 hello-pod-ns1 -- ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:80:02:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.128.2.22/23 brd 10.128.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fd01:0:0:5::16/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe80:216/64 scope link 
             valid_lft forever preferred_lft forever
      3: ovn-udn1@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:96:04:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.150.4.3/24 brd 10.150.4.255 scope global ovn-udn1
             valid_lft forever preferred_lft forever
          inet6 2010:100:200:4::3/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe96:403/64 scope link 
             valid_lft forever preferred_lft forever
      anusaxen@anusaxen:~/git/network-check$ oc exec -n ns2 hello-pod-ns2 -- ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:80:02:17 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.128.2.23/23 brd 10.128.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fd01:0:0:5::17/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe80:217/64 scope link 
             valid_lft forever preferred_lft forever
      3: ovn-udn1@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:96:04:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.150.4.3/24 brd 10.150.4.255 scope global ovn-udn1
             valid_lft forever preferred_lft forever
          inet6 2010:100:200:4::3/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe96:403/64 scope link 
             valid_lft forever preferred_lft forever
      anusaxen@anusaxen:~/git/network-check$ oc exec -n ns3 hello-pod-ns3 -- ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:80:02:18 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.128.2.24/23 brd 10.128.3.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fd01:0:0:5::18/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe80:218/64 scope link 
             valid_lft forever preferred_lft forever
      3: ovn-udn1@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:96:04:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.150.4.3/24 brd 10.150.4.255 scope global ovn-udn1
             valid_lft forever preferred_lft forever
          inet6 2010:100:200:4::3/64 scope global 
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe96:403/64 scope link 
             valid_lft forever preferred_lft forever
      anusaxen@anusaxen:~$ oc rsh ovnkube-node-nn2l4
      Defaulted container "ovn-controller" out of: ovn-controller, ovn-acl-logging, kube-rbac-proxy-node, kube-rbac-proxy-ovn-metrics, northd, nbdb, sbdb, ovnkube-controller, kubecfg-setup (init)
      sh-5.1# ovn-nbctl list ACL | grep -i udn
      external_ids        : {direction=Egress, "k8s.ovn.org/id"="default-network-controller:UDNIsolation:AllowHostARPSecondary:Egress", "k8s.ovn.org/name"=AllowHostARPSecondary, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=UDNIsolation}
      external_ids        : {direction=Egress, "k8s.ovn.org/id"="default-network-controller:UDNIsolation:DenySecondary:Egress", "k8s.ovn.org/name"=DenySecondary, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=UDNIsolation}
      external_ids        : {direction=Ingress, "k8s.ovn.org/id"="default-network-controller:UDNIsolation:DenySecondary:Ingress", "k8s.ovn.org/name"=DenySecondary, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=UDNIsolation}
      external_ids        : {direction=Ingress, "k8s.ovn.org/id"="default-network-controller:UDNIsolation:AllowHostARPSecondary:Ingress", "k8s.ovn.org/name"=AllowHostARPSecondary, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=UDNIsolation}
      external_ids        : {direction=Ingress, "k8s.ovn.org/id"="default-network-controller:UDNIsolation:AllowHostSecondary:Ingress", "k8s.ovn.org/name"=AllowHostSecondary, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=UDNIsolation}
      sh-5.1# ovn-nbctl lr-list
      27c7a8a7-a403-43e7-9cbf-9c629c3892eb (GR_master-01.anuragpkt2d.qe.devcluster.openshift.com)
      39e93260-f794-4f1f-a131-7ebac1fbc477 (l3.network.ns1_ovn_cluster_router)
      6413bef2-e610-4230-b490-cd63dbd66f16 (l3.network.ns2_ovn_cluster_router)
      ad0594e7-77da-4d8c-8903-379b6ec33126 (l3.network.ns3_ovn_cluster_router)
      cee07e6d-6a6f-48b8-808a-6b5ef892eb12 (ovn_cluster_router)
      sh-5.1# ovn-nbctl lr-route-list 39e93260-f794-4f1f-a131-7ebac1fbc477
      IPv4 Routes
      Route Table <main>:
                  10.150.0.0/24                100.88.0.3 dst-ip
                  10.150.2.0/24                100.88.0.2 dst-ip
                  10.150.3.0/24                100.88.0.5 dst-ip
            10.150.4.0/24                100.88.0.6 dst-ip
      IPv6 Routes
      Route Table <main>:
              2010:100:200::/64                   fd97::3 dst-ip
            2010:100:200:2::/64                   fd97::2 dst-ip
            2010:100:200:3::/64                   fd97::5 dst-ip
            2010:100:200:4::/64                   fd97::6 dst-ip
      sh-5.1# 
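
      All three pods report identical addresses on ovn-udn1 (10.150.4.3 and 2010:100:200:4::3) even though a separate l3.network.ns* cluster router exists for each namespace, which matches the "same network" symptom described above. A possible follow-up check from the same ovn-controller shell (these commands are a suggestion, not part of the original report; <switch-name-or-uuid> is a placeholder):

      sh-5.1# ovn-nbctl ls-list | grep -i 'l3.network'
      sh-5.1# ovn-nbctl lsp-list <switch-name-or-uuid>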
       

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot, along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so, please provide links to multiple failures with the same error instance.
      • Did it happen in both sdn and ovn jobs? If so, please provide links to multiple failures with the same error instance.
      • Did it happen on other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so, please provide links to multiple failures with the same error instance.
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run.
      • If it's a connectivity issue:
        • What is the srcNode, srcIP, srcNamespace and srcPodName?
        • What is the dstNode, dstIP, dstNamespace and dstPodName?
        • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (example commands are sketched after this list)
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template, please see OCPBUGS Template Training for Networking components
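
      As a sketch of the kind of data requested above (commands are assumptions for illustration, not from the original report): a namespace inspect can be collected with oc adm inspect, and an outage capture with tcpdump filtered on the involved pod IPs.

      # collect an inspect of the affected namespace (replace ns1 as appropriate)
      oc adm inspect ns/ns1 --dest-dir=inspect-ns1

      # capture traffic between the src and dst pod IPs during the outage window
      tcpdump -i any -nn host <srcPodIP> and host <dstPodIP> -w outage.pcap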

      Assignee: Surya Seetharaman (sseethar)
      Reporter: Anurag Saxena (anusaxen)