OpenShift Virtualization / CNV-38647

Multi network policy breaks connectivity between VMs over live migration

Related: CNV-51202 - Seamless live migration

      Description of problem:

      In a server-client scenario, creating a multi network policy breaks the connectivity (a netcat (nc) TCP connection in this case) between the server and the client during live migration.

      Version-Release number of selected component (if applicable):

      v4.15.0

      How reproducible:

      Flaky - reproduces roughly 70% of the time.

      Steps to Reproduce:

      1. Create a Namespace:
      oc new-project flat-l2
      
      2. Create a NAD:
      cat << EOF | oc create -f -
      apiVersion: k8s.cni.cncf.io/v1
      kind: NetworkAttachmentDefinition
      metadata:
        name: flat-l2-nad-sec
      spec:
        config: |
          {
            "cniVersion":"0.4.0",
            "name": "flat-l2-network2",
            "netAttachDefName": "flat-l2/flat-l2-nad-sec",
            "topology": "layer2",
            "type": "ovn-k8s-cni-overlay"
          }
      EOF
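      The spec.config field above is an embedded JSON string; a typo there only surfaces later, when a pod attaches to the network. As an optional sanity check (a sketch, assuming python3 is available on the workstation), the config can be parsed locally before applying the NAD:

      ```shell
      # Re-create the CNI config locally and verify it parses as JSON.
      cat > /tmp/flatl2-cni.json << 'EOF'
      {
        "cniVersion": "0.4.0",
        "name": "flat-l2-network2",
        "netAttachDefName": "flat-l2/flat-l2-nad-sec",
        "topology": "layer2",
        "type": "ovn-k8s-cni-overlay"
      }
      EOF
      python3 -m json.tool /tmp/flatl2-cni.json > /dev/null && echo "valid JSON"
      ```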
      
      3. Create VM vmc connected to the flat-l2-nad-sec network:
      cat << EOF | oc create -f -
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        creationTimestamp: null
        labels:
          kubevirt.io/vm: vmc
        name: vmc
      spec:
        running: true
        template:
          metadata:
            creationTimestamp: null
            labels:
              kubevirt.io/domain: vmc
              kubevirt.io/vm: vmc
          spec:
            domain:
              devices:
                disks:
                - disk:
                    bus: virtio
                  name: containerdisk
                - disk:
                    bus: virtio
                  name: cloudinitdisk
                interfaces:
                - masquerade: {}
                  name: default
                - bridge: {}
                  name: flatl2-overlay
                rng: {}
              machine:
                type: ''
              resources:
                requests:
                  memory: 1024Mi
            networks:
            - name: default
              pod: {}
            - multus:
                networkName: flat-l2-nad-sec
              name: flatl2-overlay
            terminationGracePeriodSeconds: 30
            volumes:
            - containerDisk:
                image: quay.io/openshift-cnv/qe-cnv-tests-fedora:39
              name: containerdisk
            - cloudInitNoCloud:
                networkData: |
                  ethernets:
                    eth1:
                      addresses:
                      - 10.200.0.3/24
                  version: 2
                userData: |-
                  #cloud-config
                  user: fedora
                  password: password
                  chpasswd: { expire: False }
              name: cloudinitdisk
      EOF
      
      4. Create VM vmd connected to the flat-l2-nad-sec network (change the node selector to match your first worker node):
      cat << EOF | oc create -f -
      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        creationTimestamp: null
        labels:
          kubevirt.io/vm: vmd
        name: vmd
      spec:
        running: true
        template:
          metadata:
            creationTimestamp: null
            labels:
              kubevirt.io/domain: vmd
              kubevirt.io/vm: vmd
          spec:
            domain:
              devices:
                disks:
                - disk:
                    bus: virtio
                  name: containerdisk
                - disk:
                    bus: virtio
                  name: cloudinitdisk
                interfaces:
                - masquerade: {}
                  name: default
                - bridge: {}
                  name: flatl2-overlay
                rng: {}
              machine:
                type: ''
              resources:
                requests:
                  memory: 1024Mi
            networks:
            - name: default
              pod: {}
            - multus:
                networkName: flat-l2-nad-sec
              name: flatl2-overlay
            terminationGracePeriodSeconds: 30
            volumes:
            - containerDisk:
                image: quay.io/openshift-cnv/qe-cnv-tests-fedora:39
              name: containerdisk
            - cloudInitNoCloud:
                networkData: |
                  ethernets:
                    eth1:
                      addresses:
                      - 10.200.0.4/24
                  version: 2
                userData: |-
                  #cloud-config
                  user: fedora
                  password: password
                  chpasswd: { expire: False }
              name: cloudinitdisk
            nodeSelector:
              kubernetes.io/hostname: n-awax-415-4-74t5n-worker-0-c2n7r
      EOF
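      The two VMs are given static addresses (10.200.0.3/24 and 10.200.0.4/24) via cloud-init, so direct L2 connectivity requires both to sit in the same subnet. A quick local check of that assumption (illustrative only; uses python3's ipaddress module):

      ```shell
      # Verify the two static eth1 addresses share one L2 subnet.
      python3 - << 'EOF'
      import ipaddress
      vmc = ipaddress.ip_interface("10.200.0.3/24")  # server
      vmd = ipaddress.ip_interface("10.200.0.4/24")  # client
      print(vmc.network == vmd.network)  # True: same /24, direct L2 reachability
      EOF
      ```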
      
      5. Create a MultiNetworkPolicy (MNP) on vmc (the server) that allows ingress only from vmd's IP address (the client), on a specific port:
      cat << EOF | oc create -f -
      apiVersion: k8s.cni.cncf.io/v1beta1
      kind: MultiNetworkPolicy
      metadata:
        name: ingress-ipblock
        annotations:
          k8s.v1.cni.cncf.io/policy-for: flat-l2/flat-l2-nad-sec
      spec:
        podSelector:
          matchLabels:
              kubevirt.io/vm: vmc
        policyTypes:
        - Ingress
        ingress:
        - from:
          - ipBlock:
              cidr: 10.200.0.4/32
          ports:
            - protocol: TCP
              port: 1200
      EOF
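      The policy's ipBlock is a /32, so only the client's exact address should match; any other source on the overlay is dropped. That CIDR-matching logic can be sanity-checked locally (a sketch with python3's ipaddress; 10.200.0.5 is a hypothetical third host, not part of the repro):

      ```shell
      # Confirm which source IPs fall inside the policy's ipBlock CIDR.
      python3 - << 'EOF'
      import ipaddress
      cidr = ipaddress.ip_network("10.200.0.4/32")
      print(ipaddress.ip_address("10.200.0.4") in cidr)  # True:  the client is allowed
      print(ipaddress.ip_address("10.200.0.5") in cidr)  # False: anyone else is blocked
      EOF
      ```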
      
      6. Create a connection between the server and the client VMs:
      6.a. On the server (vmc), listen on the port defined in the MNP (1200):
      for i in {1..40}; do echo -e "HTTP/1.1 200 OK-${i}\n\n" | nc -lp 1200; done
      6.b. On the client VM vmd created in step 4, send HTTP GET requests to the server:
      for i in {1..20}; do echo -e "GET http://10.200.0.3:1200 HTTP/1.0\n\n" | nc 10.200.0.3 1200 -d 1 >> packets_log.log; done
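      Each request that receives a reply appends one "HTTP/1.1 200 OK-<n>" line to the client's log, so counting those lines shows how many connections survived. Demonstrated here on a two-line sample file so it runs anywhere (on the real client, point grep at packets_log.log instead; the sample filename is arbitrary):

      ```shell
      # Simulate two successful responses, then count them the same way you
      # would count the real client's received responses.
      printf 'HTTP/1.1 200 OK-1\n\nHTTP/1.1 200 OK-2\n\n' > /tmp/sample_responses.log
      grep -c '200 OK' /tmp/sample_responses.log
      ```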
      
      7. Migrate vmc:
      virtctl migrate vmc

      Actual results:

      Sometimes the connectivity between the VMs breaks during the live migration, and as a result packets_log.log on the client contains fewer responses than expected.

      Expected results:

      Connectivity should not break.

      Additional info:

       

       

        Attachments:
        1. log-client.pcap (51 kB, Miguel Duarte de Mora Barroso)
        2. log-server.pcap (44 kB, Miguel Duarte de Mora Barroso)

        People: Petr Horacek (phoracek@redhat.com), Anat Wax (rh-ee-awax), Yossi Segev