OpenShift Bugs / OCPBUGS-34839

[4.16] BANP takes precedence over multicast


      Description of problem:
      When multicast is enabled on a namespace, a BaselineAdminNetworkPolicy (BANP) with deny rules targeting that namespace also blocks multicast traffic between its pods, i.e. the BANP takes precedence over the namespace's multicast-enabled setting.
      Version-Release number of selected component (if applicable):
      4.16.0-0.nightly-2024-06-03-060250
      How reproducible:
      Always

      Steps to Reproduce:
      1. Create a namespace and enable multicast on it (commands to reproduce this state are sketched after the describe output):

      # oc describe ns mcast
      Name:         mcast
      Labels:       kubernetes.io/metadata.name=mcast
                    pod-security.kubernetes.io/audit=privileged
                    pod-security.kubernetes.io/audit-version=v1.24
                    pod-security.kubernetes.io/enforce=privileged
                    pod-security.kubernetes.io/enforce-version=v1.24
                    pod-security.kubernetes.io/warn=privileged
                    pod-security.kubernetes.io/warn-version=v1.24
                    security.openshift.io/scc.podSecurityLabelSync=false
      Annotations:  k8s.ovn.org/multicast-enabled: true
                    openshift.io/sa.scc.mcs: s0:c27,c9
                    openshift.io/sa.scc.supplemental-groups: 1000720000/10000
                    openshift.io/sa.scc.uid-range: 1000720000/10000
      Status:       Active
      
      No resource quota.
      
      No LimitRange resource.
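
      A namespace in this state can be reproduced with the following commands; the annotation key k8s.ovn.org/multicast-enabled is the OVN-Kubernetes multicast toggle visible in the describe output above:

      % oc create namespace mcast
      % oc annotate namespace mcast k8s.ovn.org/multicast-enabled=true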
      

      2. Create two test pods in the mcast namespace (a sketch of a possible manifest follows the pod listing):

       % oc get pods -n mcast -o wide
      NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
      mcast-daemonset-7gpj2   1/1     Running   0          26s   10.128.2.19   ip-10-0-47-147.us-east-2.compute.internal   <none>           <none>
      mcast-daemonset-9d8hz   1/1     Running   0          26s   10.129.2.22   ip-10-0-19-199.us-east-2.compute.internal   <none>           <none>
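
      The exact manifest is not attached to this report; based on the pod names above, a minimal DaemonSet along the following lines reproduces the setup. The image is a placeholder, and the assumption is that it ships the omping binary:

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: mcast-daemonset
        namespace: mcast
      spec:
        selector:
          matchLabels:
            app: mcast-daemonset
        template:
          metadata:
            labels:
              app: mcast-daemonset
          spec:
            containers:
            - name: mcast-test
              # Placeholder image: any image that includes omping will do.
              image: <image-with-omping>
              command: ["sleep", "infinity"]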
      

      3. Before creating the BANP, send multicast traffic between the two pods with omping. Each pod's IP joins the multicast group, as shown below:

      % oc rsh -n mcast mcast-daemonset-7gpj2
      / # omping -c10 10.128.2.19  10.129.2.22   
      10.129.2.22 : waiting for response msg
      10.129.2.22 : joined (S,G) = (*, 232.43.211.234), pinging
      10.129.2.22 :   unicast, seq=1, size=69 bytes, dist=2, time=0.821ms
      10.129.2.22 :   unicast, seq=2, size=69 bytes, dist=2, time=0.961ms
      10.129.2.22 :   unicast, seq=3, size=69 bytes, dist=2, time=1.008ms
      10.129.2.22 :   unicast, seq=4, size=69 bytes, dist=2, time=0.919ms
      10.129.2.22 :   unicast, seq=5, size=69 bytes, dist=2, time=1.009ms
      10.129.2.22 :   unicast, seq=6, size=69 bytes, dist=2, time=0.949ms
      10.129.2.22 :   unicast, seq=7, size=69 bytes, dist=2, time=0.983ms
      10.129.2.22 :   unicast, seq=8, size=69 bytes, dist=2, time=0.969ms
      10.129.2.22 :   unicast, seq=9, size=69 bytes, dist=2, time=0.984ms
      10.129.2.22 :   unicast, seq=10, size=69 bytes, dist=2, time=0.939ms
      10.129.2.22 : given amount of query messages was sent
      
      10.129.2.22 :   unicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 0.821/0.954/1.009/0.055
      10.129.2.22 : multicast, xmt/rcv/%loss = 10/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
      
      % oc rsh -n mcast mcast-daemonset-9d8hz
      / # omping -c10 10.129.2.22  10.128.2.19 
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : joined (S,G) = (*, 232.43.211.234), pinging
      10.128.2.19 :   unicast, seq=1, size=69 bytes, dist=2, time=0.782ms
      10.128.2.19 :   unicast, seq=2, size=69 bytes, dist=2, time=0.898ms
      10.128.2.19 :   unicast, seq=3, size=69 bytes, dist=2, time=0.947ms
      10.128.2.19 :   unicast, seq=4, size=69 bytes, dist=2, time=0.910ms
      10.128.2.19 :   unicast, seq=5, size=69 bytes, dist=2, time=0.899ms
      10.128.2.19 :   unicast, seq=6, size=69 bytes, dist=2, time=0.859ms
      10.128.2.19 :   unicast, seq=7, size=69 bytes, dist=2, time=0.937ms
      10.128.2.19 :   unicast, seq=8, size=69 bytes, dist=2, time=1.011ms
      10.128.2.19 :   unicast, seq=9, size=69 bytes, dist=2, time=1.014ms
      10.128.2.19 :   unicast, seq=10, size=69 bytes, dist=2, time=0.985ms
      10.128.2.19 : given amount of query messages was sent
      
      10.128.2.19 :   unicast, xmt/rcv/%loss = 10/10/0%, min/avg/max/std-dev = 0.782/0.924/1.014/0.071
      10.128.2.19 : multicast, xmt/rcv/%loss = 10/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
      

      4. Create a BANP that denies all ingress and egress traffic for the mcast namespace (a reconstructed apply-able manifest follows the output):

      % oc get banp -o yaml
      apiVersion: v1
      items:
      - apiVersion: policy.networking.k8s.io/v1alpha1
        kind: BaselineAdminNetworkPolicy
        metadata:
          creationTimestamp: "2024-06-04T03:55:40Z"
          generation: 1
          name: default
          resourceVersion: "86122"
          uid: 1f698cfc-c9e0-47a3-b2b7-2427fe6d1230
        spec:
          egress:
          - action: Deny
            name: default-deny-ns
            to:
            - namespaces:
                matchLabels:
                  kubernetes.io/metadata.name: mcast
          ingress:
          - action: Deny
            from:
            - namespaces:
                matchLabels:
                  kubernetes.io/metadata.name: mcast
            name: default-deny-ns
          subject:
            namespaces:
              matchLabels:
                kubernetes.io/metadata.name: mcast
        status:
          conditions:
          - lastTransitionTime: "2024-06-04T03:55:40Z"
            message: Setting up OVN DB plumbing was successful
            reason: SetupSucceeded
            status: "True"
            type: Ready-In-Zone-ip-10-0-37-35.us-east-2.compute.internal
          - lastTransitionTime: "2024-06-04T03:55:40Z"
            message: Setting up OVN DB plumbing was successful
            reason: SetupSucceeded
            status: "True"
            type: Ready-In-Zone-ip-10-0-71-15.us-east-2.compute.internal
          - lastTransitionTime: "2024-06-04T03:55:40Z"
            message: Setting up OVN DB plumbing was successful
            reason: SetupSucceeded
            status: "True"
            type: Ready-In-Zone-ip-10-0-47-147.us-east-2.compute.internal
          - lastTransitionTime: "2024-06-04T03:55:40Z"
            message: Setting up OVN DB plumbing was successful
            reason: SetupSucceeded
            status: "True"
            type: Ready-In-Zone-ip-10-0-9-250.us-east-2.compute.internal
          - lastTransitionTime: "2024-06-04T03:55:40Z"
            message: Setting up OVN DB plumbing was successful
            reason: SetupSucceeded
            status: "True"
            type: Ready-In-Zone-ip-10-0-80-47.us-east-2.compute.internal
          - lastTransitionTime: "2024-06-04T03:55:40Z"
            message: Setting up OVN DB plumbing was successful
            reason: SetupSucceeded
            status: "True"
            type: Ready-In-Zone-ip-10-0-19-199.us-east-2.compute.internal
      kind: List
      metadata:
        resourceVersion: ""
      

      5. Repeat the omping test from both pods:

      % oc rsh -n mcast mcast-daemonset-7gpj2
      / # omping -c10 10.128.2.19  10.129.2.22   
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      10.129.2.22 : waiting for response msg
      ^C
      10.129.2.22 : response message never received
      
      % oc rsh -n mcast mcast-daemonset-9d8hz
      / # omping -c10 10.129.2.22  10.128.2.19 
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
      10.128.2.19 : waiting for response msg
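
      To confirm the BANP is the cause, delete it and repeat the test; the (*, G) join should then succeed again as in step 3 (banp is the short name already used above):

      % oc delete banp default
      % oc rsh -n mcast mcast-daemonset-7gpj2
      / # omping -c10 10.128.2.19 10.129.2.22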
      

      Actual results:
      Multicast traffic sent with omping is blocked by the BANP: the (*, G) join never completes and omping only reports "waiting for response msg".

      Expected results:
      The BANP should not block multicast traffic; the pods should still join the multicast group as in step 3.

      Additional info:
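
      For triage, the OVN ACLs programmed for the BANP can be dumped from the northbound database. The pod name below is a placeholder (any ovnkube-node pod should work), the nbdb container name assumes the 4.16 interconnect layout, and the grep pattern assumes the ACLs reference BaselineAdminNetworkPolicy in their external IDs:

      % oc exec -n openshift-ovn-kubernetes ovnkube-node-xxxxx -c nbdb -- \
          ovn-nbctl --no-leader-only list ACL | grep -i -A8 baselineadmin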

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What are the srcNode, srcIP, srcNamespace and srcPodName?
      • What are the dstNode, dstIP, dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
          • Please provide the UTC timestamp of the networking outage window from the must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
      • When showing the results from commands, include the entire command in the output.
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”

              Assignee: Surya Seetharaman (sseethar)
              Reporter: Huiran Wang (huirwang)
              Votes: 0
              Watchers: 4