OpenShift Bugs / OCPBUGS-58404

[OCP 4.16] Multicast is re-enabled once multicast traffic is triggered, even after being disabled, for a namespace migrated from openshift-sdn to OVN-Kubernetes



      Description of problem:

      For a namespace that had multicast enabled before the migration from openshift-sdn to OVN-Kubernetes, multicast cannot be kept disabled: even after the namespace's multicast annotation is removed, it is set back to enabled as soon as multicast traffic is triggered.
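      A quick way to observe the flip is to read the namespace annotations directly (illustrative; the namespace name comes from the reproduction steps below):

       oc get namespace multicast-test -o jsonpath='{.metadata.annotations}{"\n"}'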

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. On an openshift-sdn cluster, create the test namespace: `oc create namespace multicast-test`
      2. Enable multicast for the namespace: `oc annotate netnamespace multicast-test netnamespace.network.openshift.io/multicast-enabled=true`
      3. Create 3 test pods with iperf in the namespace (a sketch of one way to create them follows the pod listing below)

             

       oc get pod -n multicast-test -o wide
      NAME              READY   STATUS    RESTARTS   AGE   IP            NODE                                          NOMINATED NODE   READINESS GATES
      iperf-test7hclm   1/1     Running   0          52m   10.129.0.12   openshift-qe-027.sriov.openshift-qe.sdn.com   <none>           <none>
      iperf-test9cns2   1/1     Running   0          52m   10.128.0.7    openshift-qe-024.lab.eng.rdu2.redhat.com      <none>           <none>
      iperf-testmn67f   1/1     Running   0          52m   10.128.0.8    openshift-qe-024.lab.eng.rdu2.redhat.com      <none>           <none> 
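       The test pods are ordinary pods running an image that ships iperf. A minimal way to create them (the pod names and image are assumptions; any iperf-capable image works):

       # assumption: quay.io/example/iperf is a placeholder for any image that provides the iperf binary
       for i in 1 2 3; do
         oc run iperf-test-$i -n multicast-test \
           --image=quay.io/example/iperf:latest --restart=Never \
           --command -- sleep infinity
       done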

        4. Perform the offline migration from openshift-sdn to OVN-Kubernetes (an abbreviated sketch of the migration patches is shown below); no issue is observed during the migration itself
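       The migration followed the standard offline procedure; roughly the two patches that drive it are sketched here (MachineConfigPool handling, rollout waits, node reboots and post-migration cleanup are omitted; follow the official migration docs):

       # start the migration on the operator configuration
       oc patch Network.operator.openshift.io cluster --type='merge' \
         --patch '{"spec":{"migration":{"networkType":"OVNKubernetes"}}}'

       # switch the default network type once the rollout has settled
       oc patch Network.config.openshift.io cluster --type='merge' \
         --patch '{"spec":{"networkType":"OVNKubernetes"}}'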

         5. In two terminals, `oc rsh` into two of the pods and start a multicast receiver in each with `iperf -s -u -B 239.0.0.1`:

      # oc rsh -n multicast-test iperf-test9cns2
      / # ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host 
             valid_lft forever preferred_lft forever
      2: eth0@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default 
          link/ether 0a:58:0a:80:00:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 10.128.0.7/23 brd 10.128.1.255 scope global eth0
             valid_lft forever preferred_lft forever
          inet6 fe80::858:aff:fe80:7/64 scope link 
             valid_lft forever preferred_lft forever
      / # iperf -s -u -B 239.0.0.1
      ------------------------------------------------------------
      Server listening on UDP port 5001
      Joining multicast group  239.0.0.1
      Server set to single client traffic mode (per multicast receive)
      UDP buffer size:  208 KByte (default)
      ------------------------------------------------------------
      [  1] local 239.0.0.1 port 5001 connected with 10.128.0.8 port 60822 

       In the third pod, send multicast traffic with `iperf -c 239.0.0.1 -u -i 1 -b 40g -t 300`:

      / # iperf -c 239.0.0.1 -u -i 1 -b 40g -t 300
      ------------------------------------------------------------
      Client connecting to 239.0.0.1, UDP port 5001
      Sending 1470 byte datagrams, IPG target: 0.29 us (kalman adjust)
      UDP buffer size:  208 KByte (default)
      ------------------------------------------------------------
      [ ID] Interval       Transfer     Bandwidth
      [  1] 0.00-1.00 sec   223 MBytes  1.87 Gbits/sec
      [  1] 1.00-2.00 sec   222 MBytes  1.87 Gbits/sec
      [  1] 2.00-3.00 sec   223 MBytes  1.87 Gbits/sec
      [  1] 3.00-4.00 sec   223 MBytes  1.87 Gbits/sec
      [  1] 4.00-5.00 sec   223 MBytes  1.87 Gbits/sec
      [  1] 5.00-6.00 sec   221 MBytes  1.85 Gbits/sec
      [  1] 6.00-7.00 sec   223 MBytes  1.87 Gbits/sec
      [  1] 7.00-8.00 sec   224 MBytes  1.88 Gbits/sec
      [  1] 8.00-9.00 sec   221 MBytes  1.85 Gbits/sec
      [  1] 9.00-10.00 sec   225 MBytes  1.89 Gbits/sec
       

      6. Disable multicast for the namespace by removing the annotation: `oc annotate namespace multicast-test k8s.ovn.org/multicast-enabled-`

      7. Watch the namespace with `watch oc get namespace multicast-test -o yaml`; the annotation `k8s.ovn.org/multicast-enabled=true` comes back shortly afterwards.
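      A lighter-weight check than watching the full YAML is to poll just the annotation while the iperf sender from step 5 is still running (illustrative sketch):

       # the annotation should stay absent after step 6, but with this bug it
       # comes back as "true" once multicast traffic flows again
       while true; do
         date
         oc get namespace multicast-test -o yaml | grep multicast-enabled || echo "annotation absent"
         sleep 2
       done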

      must-gather logs:  https://drive.google.com/drive/folders/1-A__HkUi9GtH09cA04_gZZpOm98-l2_5

      Actual results:

      After the `k8s.ovn.org/multicast-enabled` annotation is removed from the namespace, it is re-added (set to `true`) as soon as multicast traffic flows, so multicast ends up enabled again.

      Expected results:

      Once the annotation has been removed, multicast stays disabled for the namespace and the annotation is not re-added.

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking  components

       
       
       
