OpenShift Bugs / OCPBUGS-44176

Multi Network Policy - multiple policy edits are not respected


      Description of problem:

      When a MultiNetworkPolicy (MNP) is created, it is enforced. After the first edit, the changes are also enforced. After a second edit, the changes are no longer enforced.

       

      Version-Release number of selected component (if applicable):

      v4.18; it might also be present in older versions, but I did not check.

      Seen on both PSI and bare-metal (BM) clusters.

      $ oc get clusterversion                                                                                                                      
      NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
      version   4.18.0-ec.3   True        False         25h     Cluster version is 4.18.0-ec.3

       

      How reproducible:

      Create a layer2 network and connect two pods to it. Create a policy that affects that network and edit it several times.

       

      Steps to Reproduce:

      1. Create a ns:

      oc create ns mnp-on-pods

      2. Create:

       -  Layer2 network

       -  pod1-mnp

       -  pod2-mnp

       -  MNP affecting pod2-mnp

       

      cat << EOF | oc create -f -

      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: flat-l2-nad
        namespace: mnp-on-pods
      spec:
        config: '{"cniVersion": "0.3.1", "type": "ovn-k8s-cni-overlay", "name": "flat-l2-nad", "topology": "layer2", "subnets": "192.168.100.0/29", "mtu": 1300, "netAttachDefName": "mnp-on-pods/flat-l2-nad"}'
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod1-mnp
        namespace: mnp-on-pods
        annotations:
          k8s.v1.cni.cncf.io/networks: flat-l2-nad
      spec:
        containers:
        - image: "quay.io/openshifttest/httpbin:1.2.2"
          command: ["sleep", "3600"]
          imagePullPolicy: IfNotPresent
          name: alpine
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
        securityContext:
          fsGroup: 107
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod2-mnp
        namespace: mnp-on-pods
        labels:
          pod: mnp-pod  # required so the MNP's podSelector actually selects pod2-mnp
        annotations:
          k8s.v1.cni.cncf.io/networks: flat-l2-nad
      spec:
        containers:
        - image: "quay.io/openshifttest/httpbin:1.2.2"
          command: ["sleep", "3600"]
          imagePullPolicy: IfNotPresent
          name: alpine
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
        securityContext:
          fsGroup: 107
          runAsNonRoot: true
          runAsUser: 1000
          seccompProfile:
            type: RuntimeDefault
      ---
      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        annotations:
          k8s.v1.cni.cncf.io/policy-for: mnp-on-pods/flat-l2-nad
        name: pod2-ingress-mnp
        namespace: mnp-on-pods
      spec:
        ingress:
        - from:
          - ipBlock:
              cidr: 192.168.100.123/32
        podSelector:
          matchLabels:
            pod: mnp-pod
        policyTypes:
        - Ingress

      EOF
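
      Optional sanity check before testing (the resource plural multi-networkpolicies.k8s.cni.cncf.io is an assumption taken from the upstream CRD; confirm it on your cluster with oc api-resources | grep -i multi-network):

      oc -n mnp-on-pods get net-attach-def,pod
      oc -n mnp-on-pods get multi-networkpolicies.k8s.cni.cncf.io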

       

      3. Ping from pod1-mnp to pod2-mnp -> there should be no connectivity.
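
      For step 3 the check can look like this; 192.168.100.3 is only a guess at pod2-mnp's secondary address on the 192.168.100.0/29 subnet, so read the real one from the network-status annotation first:

      oc -n mnp-on-pods get pod pod2-mnp -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
      oc -n mnp-on-pods exec pod1-mnp -- ping -c 3 192.168.100.3   # expected: 100% packet loss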

      4. Keep the ping running in the pod and edit the MNP to allow ingress from pod1-mnp's IP address -> the ping starts getting replies.
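
      A sketch of the first edit; the resource name multi-networkpolicy and the address 192.168.100.2 for pod1-mnp are assumptions (oc edit on the same field works just as well, and the real secondary IP should be taken from pod1-mnp's network-status annotation):

      oc -n mnp-on-pods patch multi-networkpolicy pod2-ingress-mnp --type=json \
        -p='[{"op": "replace", "path": "/spec/ingress/0/from/0/ipBlock/cidr", "value": "192.168.100.2/32"}]'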

      5. Edit the MNP again, so that it only allows ingress from address 192.168.100.123/32.
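
      The second edit simply restores the original, non-matching CIDR; per this bug it is this change that never takes effect:

      oc -n mnp-on-pods patch multi-networkpolicy pod2-ingress-mnp --type=json \
        -p='[{"op": "replace", "path": "/spec/ingress/0/from/0/ipBlock/cidr", "value": "192.168.100.123/32"}]'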

       

      Actual results:

      Ping is only partially blocked - a reply arrives every 2 seconds instead of every second, i.e. roughly every other packet still gets through.
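
      One possible way to see what OVN-Kubernetes actually left programmed is to dump the ACLs on pod2-mnp's node. This assumes the interconnect layout where the NB database runs inside the ovnkube-node pod, with nbdb as the container name; both are assumptions to verify on the build under test:

      oc -n openshift-ovn-kubernetes get pods -o wide | grep <node-of-pod2-mnp>
      oc -n openshift-ovn-kubernetes exec <ovnkube-node-pod> -c nbdb -- ovn-nbctl list ACL | grep -B5 -A10 mnp-on-pods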

       

      Expected results:

      Ping should fail - there should be no connectivity from pod1-mnp to pod2-mnp.

       

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal RedHat testing failure

      If it is an internal RedHat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking  components
