OpenShift Bugs / OCPBUGS-52268

[CUDN CRD API] Issue when specifying multiple namespaces in L2 CUDN.

      Description of problem:
      When specifying multiple namespaces in an L2 CUDN CR, the first namespace is not taken into account; only one namespace ends up in the applied CR.

      Version-Release number of selected component (if applicable):
      build openshift/api#2005

      How reproducible:
      Always

      Steps to Reproduce:

      1. Create a CUDN CR that selects multiple namespaces (the namespaces must already carry the required label; a minimal namespace manifest is sketched after these steps, and the CUDN manifest used is shown under Actual results).

      2. Check the YAML of the created CUDN CR; only one namespace appears in it.
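
      Note (illustrative): the namespaces being selected already carry the 'k8s.ovn.org/primary-user-defined-network=' label, as shown by the 'oc get ns --show-labels' output under Actual results. Assuming the same namespace name used in this reproduction, a minimal namespace manifest carrying that label would look roughly like:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: cudn-ns-1
        labels:
          # label present on the namespaces used in this reproduction
          k8s.ovn.org/primary-user-defined-network: ""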

      Actual results:
      Multiple namespaces were specified in the CUDN CR:

      The label 'k8s.ovn.org/primary-user-defined-network=' is already present on the namespaces below:
       
      [root@bastion ~]# oc get ns --show-labels |grep -i primary-user-defined-network
      cudn-ns-1                                          Active   18h     color=green,k8s.ovn.org/primary-user-defined-network=,kubernetes.io/metadata.name=cudn-ns-1,pod-security.kubernetes.io/audit-version=latest,pod-security.kubernetes.io/audit=restricted,pod-security.kubernetes.io/warn-version=latest,pod-security.kubernetes.io/warn=restricted
      cudn-ns-2                                          Active   18h     color=blue,k8s.ovn.org/primary-user-defined-network=,kubernetes.io/metadata.name=cudn-ns-2,pod-security.kubernetes.io/audit-version=latest,pod-security.kubernetes.io/audit=restricted,pod-security.kubernetes.io/warn-version=latest,pod-security.kubernetes.io/warn=restricted
      
      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: udn-test
      spec:
        namespaceSelector: 
          matchLabels: 
            kubernetes.io/metadata.name: "cudn-ns-2"
            kubernetes.io/metadata.name: "cudn-ns-1"
        network: 
          topology: Layer2 
          layer2: 
            role: Primary 
            subnets:
              - "10.100.0.0/16"
      
      
      Here only one namespace is present in the CUDN CR, even though two namespaces were specified in the YAML above:
       
      [root@bastion ~]# oc get clusteruserdefinednetworks.k8s.ovn.org udn-test -oyaml
      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        annotations:
          kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"k8s.ovn.org/v1","kind":"ClusterUserDefinedNetwork","metadata":{"annotations":{},"name":"udn-test"},"spec":{"namespaceSelector":{"matchLabels":{"kubernetes.io/metadata.name":"cudn-ns-1"}},"network":{"layer2":{"role":"Primary","subnets":["10.100.0.0/16"]},"topology":"Layer2"}}}
        creationTimestamp: "2025-03-04T07:45:27Z"
        finalizers:
        - k8s.ovn.org/user-defined-network-protection
        generation: 1
        name: udn-test
        resourceVersion: "1230873"
        uid: 81d74be8-877f-4117-895a-bd58e21801c8
      spec:
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: cudn-ns-1
        network:
          layer2:
            role: Primary
            subnets:
            - 10.100.0.0/16
          topology: Layer2
      status:
        conditions:
        - lastTransitionTime: "2025-03-04T07:45:27Z"
          message: 'NetworkAttachmentDefinition has been created in following namespaces:
            [cudn-ns-1]'
          reason: NetworkAttachmentDefinitionCreated
          status: "True"
          type: NetworkCreated
       

      Expected results:
      Both namespaces should be part of the CUDN CR (the NetworkAttachmentDefinition should be created in both cudn-ns-1 and cudn-ns-2).

      Additional info:
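
      Note: 'matchLabels' is a plain map, so repeating the 'kubernetes.io/metadata.name' key in the manifest collapses to a single entry when the YAML is parsed (commonly the last value wins), which would explain why the applied configuration above contains only cudn-ns-1. Selecting several namespaces by name is typically expressed with 'matchExpressions' instead, or by putting a shared label on the namespaces and matching on it. A minimal sketch of the matchExpressions form, reusing the names and subnet from this reproduction:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: udn-test
      spec:
        namespaceSelector:
          # set-based requirement selecting both namespaces by name
          matchExpressions:
            - key: kubernetes.io/metadata.name
              operator: In
              values:
                - cudn-ns-1
                - cudn-ns-2
        network:
          topology: Layer2
          layer2:
            role: Primary
            subnets:
              - "10.100.0.0/16"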

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure 
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
