OCPBUGS-49746: Creating a CUDN with mismatched topology should fail


      Description of problem:

      Creating a CUDN whose spec.topology does not match the topology configuration succeeds, but it should fail because the spec is invalid.

      See the examples below.

      Version-Release number of selected component (if applicable):

      4.18

      How reproducible:

      100%

      Steps to Reproduce:

      1. Create a CUDN CR whose spec.topology does not match the topology configuration:

      Example 1:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: mynet
      spec:
        namespaceSelector:
          matchLabels:
            "kubernetes.io/metadata.name": "red"
        network:
          topology: Layer2 # <--- spec.topology should match
          layer3:          # <--- the topology configuration type
            role: Primary
            subnets: [{cidr: 192.168.112.12/24}]

      Example 2:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: mynet
      spec:
        namespaceSelector:
          matchLabels:
            "kubernetes.io/metadata.name": "red"
        network:
          topology: Layer3 # <--- spec.topology should match
          layer2:          # <--- the topology configuration type
            role: Secondary
            subnets: [192.168.112.12/24]

      Actual results:

      The CUDN is created successfully.

      The ovn-kubernetes control-plane pod enters a crash loop due to the following panic:

      panic: runtime error: invalid memory address or nil pointer dereference
      [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1dd8415]
      
      goroutine 12154 [running]:
      github.com/ovn-org/ovn-kubernetes/go-controller/pkg/util/udn.IsPrimaryNetwork({0x2d8dfd0, 0xc0059746a8})
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/util/udn/udn.go:17 +0x55
      github.com/ovn-org/ovn-kubernetes/go-controller/pkg/clustermanager/userdefinednetwork.(*Controller).updateNAD(0xc0007340f0, {0x2dcad90, 0xc005974580}, {0xc000012480, 0x3})
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/clustermanager/userdefinednetwork/controller_helper.go:24 +0x94
      github.com/ovn-org/ovn-kubernetes/go-controller/pkg/clustermanager/userdefinednetwork.(*Controller).syncClusterUDN(0xc0007340f0, 0xc005974420)
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/clustermanager/userdefinednetwork/controller.go:604 +0xa10
      github.com/ovn-org/ovn-kubernetes/go-controller/pkg/clustermanager/userdefinednetwork.(*Controller).reconcileCUDN(0xc0007340f0, {0xc00651c2d6, 0x5})
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/clustermanager/userdefinednetwork/controller.go:519 +0xff
      github.com/ovn-org/ovn-kubernetes/go-controller/pkg/controller.(*controller[...]).processNextQueueItem(0x19a93e0)
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/controller/controller.go:253 +0xd7
      github.com/ovn-org/ovn-kubernetes/go-controller/pkg/controller.(*controller[...]).startWorkers.func1()
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/controller/controller.go:163 +0x6f
      created by github.com/ovn-org/ovn-kubernetes/go-controller/pkg/controller.(*controller[...]).startWorkers in goroutine 7794
          /home/omergi/workspace/github.com/ovn-kubernetes/go-controller/pkg/controller/controller.go:160 +0x185 
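
      The panic is consistent with the NAD sync helper selecting the topology config block based on spec.topology and dereferencing it without a nil check: with topology: Layer2 but only spec.layer3 populated, the Layer2 config is nil. Below is a minimal standalone sketch of that failure pattern; the types and helper are simplified stand-ins, not the actual ovn-kubernetes code:

      package main

      import "fmt"

      // Simplified stand-ins for the CUDN network spec types.
      type Layer2Config struct{ Role string }
      type Layer3Config struct{ Role string }

      type NetworkSpec struct {
          Topology string
          Layer2   *Layer2Config
          Layer3   *Layer3Config
      }

      // isPrimaryNetwork mirrors the suspected pattern: it trusts Topology
      // to tell it which config block is populated, then dereferences it.
      func isPrimaryNetwork(spec *NetworkSpec) bool {
          switch spec.Topology {
          case "Layer2":
              return spec.Layer2.Role == "Primary" // nil when only layer3 is set
          case "Layer3":
              return spec.Layer3.Role == "Primary" // nil when only layer2 is set
          }
          return false
      }

      func main() {
          // Mirrors Example 1: topology says Layer2, but only layer3 is configured.
          spec := &NetworkSpec{Topology: "Layer2", Layer3: &Layer3Config{Role: "Primary"}}
          fmt.Println(isPrimaryNetwork(spec)) // panic: invalid memory address or nil pointer dereference
      }

      A nil check in the controller would only mask the problem; as noted under Expected results, the invalid object should never be admitted in the first place.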


      Expected results:

      Creating a CUDN with mismatched spec.topology and topology configuration should fail at the API level, i.e., the apiserver should reject the object at admission.
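
      One way to enforce this at the API level is CEL validation on the CRD schema, for example via kubebuilder XValidation markers on the network spec type. The sketch below is illustrative only; the exact rules, messages, and field definitions are assumptions, not the shipped ovn-kubernetes API:

      // Layer2Config and Layer3Config are as in the sketch above.
      //
      // Each rule requires the config block matching spec.topology to be set,
      // and forbids the other one; mismatched objects are rejected at admission.
      // +kubebuilder:validation:XValidation:rule="has(self.layer2) == (self.topology == 'Layer2')",message="spec.layer2 is required when topology is Layer2, and forbidden otherwise"
      // +kubebuilder:validation:XValidation:rule="has(self.layer3) == (self.topology == 'Layer3')",message="spec.layer3 is required when topology is Layer3, and forbidden otherwise"
      type NetworkSpec struct {
          // +kubebuilder:validation:Required
          Topology string `json:"topology"`

          // +optional
          Layer2 *Layer2Config `json:"layer2,omitempty"`

          // +optional
          Layer3 *Layer3Config `json:"layer3,omitempty"`
      }

      With rules like these in place, applying either example above would fail with a field validation error from the apiserver instead of being admitted and crashing the ovnkube control plane.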

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
