OpenShift Bugs / OCPBUGS-55075

More than one label in the CUDN is not honored; no NAD is created in the labeled namespaces

      Description of problem:

      If a CUDN is created with more than one item under matchLabels, the status reports success but no NADs are created in the selected namespaces.

      Version-Release number of selected component (if applicable):

      Tested on 4.19; the issue is expected to be present in 4.18 as well.

       oc version

      Client Version: 4.15.9
      Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
      Server Version: 4.19.0-0.nightly-2025-04-16-042029
      Kubernetes Version: v1.32.3

      How reproducible:

      Always

      Steps to Reproduce:

      1. Create two namespaces with different labels (example creation commands follow the listings below):

       oc get ns -l dept=qa

      NAME   STATUS   AGE
      as1    Active   2m43s

      oc get ns -l team=qa

      NAME   STATUS   AGE
      as2    Active   2m56s
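
      For reference, the namespaces were created and labeled with commands along these lines (names and labels are taken from the listings above; the exact commands are a sketch):

       oc create namespace as1
       oc label namespace as1 dept=qa
       oc create namespace as2
       oc label namespace as2 team=qa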

       

      2. Create a CUDN with the YAML below:

       

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: l3-network
      spec:
        namespaceSelector:
          matchLabels:
            dept: qa
            team: qa
        network:
          topology: Layer3
          layer3:
            role: Primary
            mtu: 1300
            subnets:
              - cidr: 10.150.0.0/16
                hostSubnet: 24
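
      Apply the CUDN and inspect its status, for example (cudn.yaml is an illustrative filename):

       oc apply -f cudn.yaml
       oc get clusteruserdefinednetwork l3-network -o yaml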

       

      Actual results:

      The CUDN reports success, but no NAD is created in either namespace and the list of namespaces in the status message is empty.

       oc describe clusteruserdefinednetwork l3-network 

      Name:         l3-network
      Namespace:    
      Labels:       <none>
      Annotations:  <none>
      API Version:  k8s.ovn.org/v1
      Kind:         ClusterUserDefinedNetwork
      Metadata:
        Creation Timestamp:  2025-04-16T15:15:13Z
        Finalizers:
          k8s.ovn.org/user-defined-network-protection
        Generation:        1
        Resource Version:  57194
        UID:               7a651d6b-f45f-49d8-9f80-9c6bf3519f79
      Spec:
        Namespace Selector:
          Match Labels:
            Dept:  qa
            Team:  qa
        Network:
          layer3:
            Mtu:   1300
            Role:  Primary
            Subnets:
              Cidr:         10.150.0.0/16
              Host Subnet:  24
          Topology:         Layer3
      Status:
        Conditions:
          Last Transition Time:  2025-04-16T15:15:13Z
          Message:               NetworkAttachmentDefinition has been created in following namespaces: []
          Reason:                NetworkAttachmentDefinitionCreated
          Status:                True
          Type:                  NetworkCreated
      Events:                    <none>
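
      The condition message can also be pulled directly with jsonpath, for example:

       oc get clusteruserdefinednetwork l3-network -o jsonpath='{.status.conditions[?(@.type=="NetworkCreated")].message}'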

      oc -n as1 get net-attach-def

      No resources found in as1 namespace.
       
      

      oc -n as2 get net-attach-def

      No resources found in as2 namespace.
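
      To help narrow down where the selector evaluation stops, the cluster-manager logs can be checked; a sketch assuming the default OVN-Kubernetes layout in recent releases (deployment and container names may differ):

       oc -n openshift-ovn-kubernetes logs deployment/ovnkube-control-plane -c ovnkube-cluster-manager | grep -i userdefinednetwork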
       
      

      Expected results:

      A NAD should be created successfully in each of the namespaces carrying the respective labels.
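
      Note for triage: with core Kubernetes label-selector semantics, multiple matchLabels entries are ANDed, so the selector above would only match namespaces carrying both dept=qa and team=qa; whether the CUDN controller intends different semantics is part of what this bug needs to clarify. Under AND semantics, one hypothetical way to target both namespaces is a shared label (label name is illustrative):

       oc label namespace as1 env=qa
       oc label namespace as2 env=qa

      with the selector reduced to:

        namespaceSelector:
          matchLabels:
            env: qa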

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
