OpenShift Virtualization / CNV-28040

[2186372] Packet drops during the initial phase of VM live migration


      Description of problem:

      When a virtual machine is live migrated, packet drops are observed on inbound traffic to the VM immediately after the target virt-launcher pod starts. These packets are routed to the destination node while the migration is still running.

      Test: A ping to the VM was started from an external client machine during the migration, and packet captures (tcpdump) were collected on the source worker node, the destination worker node, and the client machine (a sketch of the capture setup follows the VM details below).

      IP address of the VM: 10.74.130.192
      MAC: 02:6a:85:00:00:21
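
      The capture setup was roughly the following. This is only a sketch: the interface names are placeholders, and the exact commands are assumptions inferred from the capture file names used below.

      ~~~
      # On the source and destination worker nodes (uplink interface name is an assumption):
      tcpdump -i <uplink-interface> -w worker_src.pcap  'icmp or arp or icmp6'   # source node
      tcpdump -i <uplink-interface> -w worker1_dst.pcap 'icmp or arp or icmp6'   # destination node

      # On the external client:
      tcpdump -i <client-interface> -w client.pcap 'host 10.74.130.192'
      ping -i 0.5 10.74.130.192
      ~~~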

      ~~~

      # ping -i 0.5 10.74.130.192

      64 bytes from 10.74.130.192: icmp_seq=11 ttl=64 time=0.375 ms
      64 bytes from 10.74.130.192: icmp_seq=12 ttl=64 time=0.624 ms
      64 bytes from 10.74.130.192: icmp_seq=13 ttl=64 time=0.299 ms
      64 bytes from 10.74.130.192: icmp_seq=14 ttl=64 time=63.5 ms

      <-- drops -->

      64 bytes from 10.74.130.192: icmp_seq=83 ttl=64 time=415 ms
      64 bytes from 10.74.130.192: icmp_seq=84 ttl=64 time=11.9 ms
      ~~~

      The lost packets, as seen in the client packet capture:

      ~~~

      # TZ=UTC tshark -nr client.pcap -t ad icmp
        ....
        ....
        49 2023-04-12 04:26:11.881169 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=13/3328, ttl=64 (request in 48)
        50 2023-04-12 04:26:12.380186 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=14/3584, ttl=64
        51 2023-04-12 04:26:12.443677 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=14/3584, ttl=64 (request in 50)
        54 2023-04-12 04:26:12.880854 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=15/3840, ttl=64
        55 2023-04-12 04:26:13.380357 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=16/4096, ttl=64
        58 2023-04-12 04:26:13.881358 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=17/4352, ttl=64
        59 2023-04-12 04:26:14.380506 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=18/4608, ttl=64
        61 2023-04-12 04:26:14.880871 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=19/4864, ttl=64
        62 2023-04-12 04:26:15.380386 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=20/5120, ttl=64
        63 2023-04-12 04:26:15.880623 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=21/5376, ttl=64
        .......
        .......
        .......
        127 2023-04-12 04:26:46.402744 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=82/20992, ttl=64
        129 2023-04-12 04:26:47.316301 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=83/21248, ttl=64
        130 2023-04-12 04:26:47.318223 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=83/21248, ttl=64 (request in 129)
        131 2023-04-12 04:26:47.402238 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=84/21504, ttl=64
        132 2023-04-12 04:26:47.414150 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=84/21504, ttl=64 (request in 131)
        ~~~

      These packets actually reached the destination node although the migration was still running:

      ~~~
      On the destination node, seq 15 - 82 arrived here:

      # TZ=UTC tshark -nr worker1_dst.pcap -t ad icmp
        3 2023-04-12 04:26:12.878671 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=15/3840, ttl=64
        4 2023-04-12 04:26:13.378150 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=16/4096, ttl=64
        7 2023-04-12 04:26:13.879223 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=17/4352, ttl=64
        8 2023-04-12 04:26:14.378311 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=18/4608, ttl=64
        10 2023-04-12 04:26:14.878612 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=19/4864, ttl=64
        11 2023-04-12 04:26:15.378144 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=20/5120, ttl=64
        12 2023-04-12 04:26:15.878365 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=21/5376, ttl=64
        13 2023-04-12 04:26:16.378102 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=22/5632, ttl=64
        ....
        ....
        ....

      On the source node, seq 15 - 82 are missing:

      48 2023-04-12 04:26:11.884612 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=13/3328, ttl=64
      49 2023-04-12 04:26:11.884731 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=13/3328, ttl=64 (request in 48)
      50 2023-04-12 04:26:12.389793 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=14/3584, ttl=64
      51 2023-04-12 04:26:12.447208 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=14/3584, ttl=64 (request in 50)

      < seq 15 - 82 missing >

      57 2023-04-12 04:26:47.320115 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=83/21248, ttl=64
      58 2023-04-12 04:26:47.320579 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=83/21248, ttl=64 (request in 57)
      59 2023-04-12 04:26:47.413168 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=84/21504, ttl=64
      60 2023-04-12 04:26:47.416380 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=84/21504, ttl=64 (request in 59)
      61 2023-04-12 04:26:47.907042 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=85/21760, ttl=64
      62 2023-04-12 04:26:47.907207 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=85/21760, ttl=64 (request in 61)
      63 2023-04-12 04:26:48.407203 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=86/22016, ttl=64

      ~~~

      The domain was still being migrated during this time, and hence the destination VM was in Paused state:

      ~~~
      oc logs virt-launcher-rhel8-d58yi5fym85626yq-h76wk | grep "kubevirt domain status"

      {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Paused(3):StartingUp(11)","pos":"client.go:289","timestamp":"2023-04-12T04:26:15.582630Z"}
      {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Paused(3):Migration(2)","pos":"client.go:289","timestamp":"2023-04-12T04:26:16.244198Z"}
      {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Paused(3):Migration(2)","pos":"client.go:289","timestamp":"2023-04-12T04:28:30.799153Z"}
      {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Paused(3):Migration(2)","pos":"client.go:289","timestamp":"2023-04-12T04:28:30.832917Z"}

      <-- Migration completed -->

      {"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Running(1):Unknown(2)","pos":"client.go:289","timestamp":"2023-04-12T04:28:30.883757Z"}

      ~~~

      It looks like the client side is misled during the migration and routes traffic to the destination node while the migration is still in progress, because of the IPv6 multicast packets below, which originate from 02:6a:85:00:00:21 (the MAC address of the VM interface).

      ~~~
      48 2023-04-12 04:26:11.880892 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=13/3328, ttl=64
      49 2023-04-12 04:26:11.881169 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=13/3328, ttl=64 (request in 48)
      50 2023-04-12 04:26:12.380186 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=14/3584, ttl=64
      51 2023-04-12 04:26:12.443677 10.74.130.192 → 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=14/3584, ttl=64 (request in 50)
      52 2023-04-12 04:26:12.470232 :: → ff02::16 90 Multicast Listener Report Message v2 <<<
      53 2023-04-12 04:26:12.782278 :: → ff02::1:ff00:21 86 Neighbor Solicitation for fe80::6a:85ff:fe00:21 <<<
      54 2023-04-12 04:26:12.880854 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=15/3840, ttl=64 <<< ping routed to dest node
      55 2023-04-12 04:26:13.380357 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=16/4096, ttl=64
      56 2023-04-12 04:26:13.798396 fe80::6a:85ff:fe00:21 → ff02::16 90 Multicast Listener Report Message v2
      57 2023-04-12 04:26:13.798452 fe80::6a:85ff:fe00:21 → ff02::2 70 Router Solicitation from 02:6a:85:00:00:21
      58 2023-04-12 04:26:13.881358 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=17/4352, ttl=64
      59 2023-04-12 04:26:14.380506 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=18/4608, ttl=64
      60 2023-04-12 04:26:14.390271 :: → ff02::1:ff00:21 86 Neighbor Solicitation for 2620:52:0:4a80:6a:85ff:fe00:21
      61 2023-04-12 04:26:14.880871 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=19/4864, ttl=64
      62 2023-04-12 04:26:15.380386 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=20/5120, ttl=64
      63 2023-04-12 04:26:15.880623 10.74.128.144 → 10.74.130.192 98 Echo (ping) request id=0xa732, seq=21/5376, ttl=64

      Packet 52:

      52 2023-04-12 09:56:12.470232 :: → ff02::16 90 Multicast Listener Report Message v2
      Frame 52: 90 bytes on wire (720 bits), 90 bytes captured (720 bits)
      Ethernet II, Src: 02:6a:85:00:00:21 (02:6a:85:00:00:21), Dst: IPv6mcast_16 (33:33:00:00:00:16) <<<
      Internet Protocol Version 6, Src: ::, Dst: ff02::16
      Internet Control Message Protocol v6

      Packet 53:

      53 2023-04-12 09:56:12.782278 :: → ff02::1:ff00:21 86 Neighbor Solicitation for fe80::6a:85ff:fe00:21
      Frame 53: 86 bytes on wire (688 bits), 86 bytes captured (688 bits)
      Ethernet II, Src: 02:6a:85:00:00:21 (02:6a:85:00:00:21), Dst: IPv6mcast_ff:00:00:21 (33:33:ff:00:00:21)
      Internet Protocol Version 6, Src: ::, Dst: ff02::1:ff00:21
      Internet Control Message Protocol v6
      ~~~
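
      For reference, the offending frames can be isolated from a capture with a display filter along the following lines (a hedged example; ICMPv6 type 143 is the MLDv2 report and type 135 the Neighbor Solicitation):

      ~~~
      tshark -nr client.pcap -t ad -Y 'eth.src == 02:6a:85:00:00:21 and (icmpv6.type == 143 or icmpv6.type == 135)'
      ~~~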

      These IPv6 multicast packets originate from the destination virt-launcher pod, apparently when the kernel performs IPv6 neighbor discovery on the pod interface. The virt-launcher pod holds the VM's MAC address on that interface before it creates the bridge and passes the MAC on to the VM.

      ~~~
      net1 has 02:6a:85:00:00:21 before the bridge is created:

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
      3: eth0@if198: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
      link/ether 0a:58:0a:83:00:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
      inet 10.131.0.154/23 brd 10.131.1.255 scope global eth0
      valid_lft forever preferred_lft forever
      inet6 fe80::d023:79ff:fe49:79d/64 scope link
      valid_lft forever preferred_lft forever
      4: net1@if199: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default <<<<
      link/ether 02:6a:85:00:00:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0 <<<<
      inet6 fe80::6a:85ff:fe00:21/64 scope link tentative
      valid_lft forever preferred_lft forever
      ~~~
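
      If needed, the IPv6 duplicate address detection activity (which is what emits the NS/MLD frames) can also be checked from inside the destination virt-launcher pod. A hedged example, reusing the pod name from the log command above and assuming the usual "compute" container name:

      ~~~
      oc exec virt-launcher-rhel8-d58yi5fym85626yq-h76wk -c compute -- ip -6 addr show dev net1
      oc exec virt-launcher-rhel8-d58yi5fym85626yq-h76wk -c compute -- cat /proc/sys/net/ipv6/conf/net1/accept_dad
      ~~~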

      The packets are routed to the destination node until the client performs ARP discovery again:

      ~~~
      TZ=UTC tshark -nr client.pcap -t ad

      125 2023-04-12 04:26:45.611473 18:66:da:9f:b3:b9 โ†’ 02:6a:85:00:00:21 42 Who has 10.74.130.192? Tell 10.74.128.144
      126 2023-04-12 04:26:45.903198 10.74.128.144 โ†’ 10.74.130.192 98 Echo (ping) request id=0xa732, seq=81/20736, ttl=64
      127 2023-04-12 04:26:46.402744 10.74.128.144 โ†’ 10.74.130.192 98 Echo (ping) request id=0xa732, seq=82/20992, ttl=64
      128 2023-04-12 04:26:47.316291 02:6a:85:00:00:21 โ†’ 18:66:da:9f:b3:b9 60 10.74.130.192 is at 02:6a:85:00:00:21
      129 2023-04-12 04:26:47.316301 10.74.128.144 โ†’ 10.74.130.192 98 Echo (ping) request id=0xa732, seq=83/21248, ttl=64
      130 2023-04-12 04:26:47.318223 10.74.130.192 โ†’ 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=83/21248, ttl=64 (request in 129)
      ~~~
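
      The ARP exchange that restores connectivity can be isolated with, for example:

      ~~~
      tshark -nr client.pcap -t ad -Y arp
      ~~~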

      Once the migration completes, we can see RARP announcements from the destination node as expected, and traffic is then routed to the destination node:

      ~~~
      TZ=UTC tshark -nr client.pcap -t ad

      77 2023-04-12 04:28:30.835510 02:6a:85:00:00:21 โ†’ ff:ff:ff:ff:ff:ff 60 Who is 02:6a:85:00:00:21? Tell 02:6a:85:00:00:21
      78 2023-04-12 04:28:30.835539 02:6a:85:00:00:21 โ†’ ff:ff:ff:ff:ff:ff 60 Who is 02:6a:85:00:00:21? Tell 02:6a:85:00:00:21
      79 2023-04-12 04:28:30.835553 02:6a:85:00:00:21 โ†’ ff:ff:ff:ff:ff:ff 60 Who is 02:6a:85:00:00:21? Tell 02:6a:85:00:00:21
      80 2023-04-12 04:28:30.848542 02:6a:85:00:00:21 โ†’ 18:66:da:9f:b3:b9 42 Who has 10.74.128.144? Tell 10.74.130.192
      81 2023-04-12 04:28:30.848743 18:66:da:9f:b3:b9 โ†’ 02:6a:85:00:00:21 60 10.74.128.144 is at 18:66:da:9f:b3:b9
      83 2023-04-12 04:28:30.851173 10.74.130.192 โ†’ 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=21/5376, ttl=64 (request in 12)
      84 2023-04-12 04:28:30.851352 10.74.130.192 โ†’ 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=22/5632, ttl=64 (request in 13)
      85 2023-04-12 04:28:30.851454 10.74.130.192 โ†’ 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=23/5888, ttl=64 (request in 14)
      86 2023-04-12 04:28:30.851541 10.74.130.192 โ†’ 10.74.128.144 98 Echo (ping) reply id=0xa732, seq=24/6144, ttl=64 (request in 15)
      ~~~
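
      The RARP announcements themselves can be filtered by ARP opcode (3 = reverse request), for example:

      ~~~
      tshark -nr client.pcap -t ad -Y 'arp.opcode == 3'
      ~~~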

      Version-Release number of selected component (if applicable):

      OpenShift Virtualization 4.12.0

      How reproducible:

      100%

      Steps to Reproduce:

      1. Start a ping to the VM during the VM migration. Use -i in ping to shorten the interval between packets.
      2. Observe packet drops for a few seconds when the destination virt-launcher pod starts; example commands are shown below.
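
      For example (a sketch; the IP address is the VM address from the description, and the VM name is a placeholder):

      ~~~
      # On an external client on the same L2 segment as the VM:
      ping -i 0.2 10.74.130.192

      # In parallel, trigger the live migration:
      virtctl migrate <vm-name>
      ~~~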

      Actual results:

      Packet drops occur during the initial phase of VM live migration. These drops are in addition to the drops during the final stage of migration, where the source qemu is paused to move the last remaining memory pages to the destination qemu. As a result, the user experiences more network downtime than on other platforms such as RHV.

      Expected results:

      Additional info:

        1. vma.yaml
          2 kB
        2. vmb.yaml
          2 kB

            [CNV-28040] [2186372] Packet drops during the initial phase of VM live migration

            Petr Horacek added a comment - Hello rh-ee-hamir, the issue was fixed and backported to 4.15. I created a clone of this issue, https://issues.redhat.com/browse/CNV-48109, for the team to evaluate how we could backport it further.

            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of cpaas-midstream / openshift-virtualization / kubevirt-tekton-tasks on branch cnv-4.14-rhel-9_upstream_3ef865032a1ed5c1254755396a3ce1ec:

            Updated US source to: 2dbff1e Merge pull request #484 from kubevirt/renovate/release-v0.15-go-golang.org/x/net-vulnerability

            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of cpaas-midstream / openshift-virtualization / kubevirt-tekton-tasks on branch cnv-4.15-rhel-9_upstream_78648bb9c2fc1f0505111a9f25c0d7f8:

            Updated US source to: 87a28d8 Merge pull request #480 from kubevirt/renovate/release-v0.17-go-golang.org/x/net-vulnerability

            Errata Tool added a comment - Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Moderate: OpenShift Virtualization 4.16.0 Images security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2024:4455

            Yossi Segev added a comment - The core of this bug is that packet loss was experienced both at the beginning of the migration (which should not happen) and again at the final stage of the migration, when the VM completes the migration (which is expected). I therefore verified by ensuring that packet loss now happens in only one stage of the migration.

            Verified on:
            OCP 4.16.0-rc.5
            CNV 4.16.0 (brew.registry.redhat.io/rh-osbs/iib:750104)
            cnv-containernetworking-plugins-rhel9-container-v4.16.0-109 (where the fixed bridge CNI is included).

            Verified by following this scenario:
            1. Applied a node linux-bridge using this NodeNetworkConfigurationPolicy:

            apiVersion: nmstate.io/v1
            kind: NodeNetworkConfigurationPolicy
            metadata:
              name: migration-worker-1
            spec:
              desiredState:
                interfaces:
                - bridge:
                    options:
                      stp:
                        enabled: false
                    port:
                    - name: ens5
                  ipv4:
                    auto-dns: true
                    dhcp: false
                    enabled: false
                  ipv6:
                    auto-dns: true
                    autoconf: false
                    dhcp: false
                    enabled: false
                  name: migration-br
                  state: up
                  type: linux-bridge
            

            2. Applied the following NetworkAttachmentDefinition:

            apiVersion: k8s.cni.cncf.io/v1
            kind: NetworkAttachmentDefinition
            metadata:
              name: network-migration-nad
              namespace: migration-test-migration
            spec:
              config: '{"cniVersion": "0.3.1", "type": "cnv-bridge", "name": "migration-br", "bridge":
                "migration-br"}'
            

            3. Started 2 VMs, each with a secondary interface (attached to the node's linux bridge) and a unique IP address.
            The VM manifests are attached.
            vmb.yaml vma.yaml

            4. After both VMs were up and running, I initiated a ping from vma to the secondary interface of vmb:

            sudo ping -i 0.5 10.200.0.2
            

            5. While the ping was running, I started sniffing for ICMP packets on vma:

            [fedora@vma-1719908643-3463576 ~]$ sudo tcpdump icmp -i eth1
            [ 8048.799340] virtio_net virtio1 eth1: entered promiscuous mode
            dropped privs to tcpdump
            tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
            listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
            10:38:48.357837 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 15843, length 64
            10:38:48.358253 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 15843, length 64
            10:38:48.861789 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 15844, length 64
            10:38:48.862152 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 15844, length 64
            10:38:49.371120 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 15845, length 64
            10:38:49.371650 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 15845, length 64
            ...
            

            6. I triggered the migration of vmb.

            $ virtctl migrate vmb
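            # (Hedged addition) migration progress can also be watched via the
            # VirtualMachineInstanceMigration resource, e.g.:
            $ oc get virtualmachineinstancemigrations -n <vm-namespace> -w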
            

            7. While the migration ran to full completion, I observed one batch of several consecutive lost packets:

            10:46:43.446155 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16786, length 64
            10:46:43.446757 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16786, length 64
            10:46:43.949899 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16787, length 64
            10:46:43.951824 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16787, length 64
            10:46:44.450648 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16788, length 64
            10:46:44.451107 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16788, length 64
            10:46:44.957940 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16789, length 64
            10:46:44.958370 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16789, length 64
            10:46:45.461899 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16790, length 64
            10:46:45.462311 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16790, length 64
            10:46:45.965983 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16791, length 64
            10:46:45.966331 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16791, length 64
            10:46:46.469916 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16792, length 64
            10:46:46.470286 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16792, length 64
            10:46:46.973937 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16793, length 64
            10:46:46.974372 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16793, length 64
            10:46:47.477927 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16794, length 64
            10:46:47.478404 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16794, length 64
            10:46:47.981998 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16795, length 64
            10:46:48.485954 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16796, length 64
            10:46:48.989927 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16797, length 64
            10:46:48.990325 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16797, length 64
            10:46:49.494041 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16798, length 64
            10:46:49.494591 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16798, length 64
            10:46:49.997999 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16799, length 64
            10:46:49.998502 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16799, length 64
            10:46:50.501891 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16800, length 64
            10:46:50.502282 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16800, length 64
            10:46:51.006036 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16801, length 64
            10:46:51.006604 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16801, length 64
            10:46:51.509922 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16802, length 64
            10:46:51.510301 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16802, length 64
            10:46:52.013887 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16803, length 64
            10:46:52.014288 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16803, length 64
            10:46:52.517909 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16804, length 64
            10:46:52.518736 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16804, length 64
            10:46:53.018667 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16805, length 64
            10:46:53.019260 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16805, length 64
            10:46:53.525946 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16806, length 64
            10:46:53.526401 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16806, length 64
            10:46:54.029960 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16807, length 64
            10:46:54.030314 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16807, length 64
            10:46:54.533937 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16808, length 64
            10:46:54.534317 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16808, length 64
            10:46:55.037953 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16809, length 64
            10:46:55.038371 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16809, length 64
            10:46:55.541931 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16810, length 64
            10:46:55.542277 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16810, length 64
            10:46:56.045930 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16811, length 64
            10:46:56.046313 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16811, length 64
            10:46:56.549918 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16812, length 64
            10:46:56.550406 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16812, length 64
            10:46:57.053915 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16813, length 64
            10:46:57.054259 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16813, length 64
            10:46:57.557883 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16814, length 64
            10:46:57.558208 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16814, length 64
            10:46:58.062015 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16815, length 64
            10:46:58.062460 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16815, length 64
            10:46:58.565915 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16816, length 64
            10:46:58.566447 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16816, length 64
            10:46:59.069996 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16817, length 64
            10:46:59.070393 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16817, length 64
            10:46:59.573999 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16818, length 64
            10:46:59.574415 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16818, length 64
            10:47:00.077946 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16819, length 64
            10:47:00.078290 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16819, length 64
            10:47:00.581896 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16820, length 64
            10:47:00.582318 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16820, length 64
            10:47:01.086129 IP vma-1719908643-3463576 > 10.200.0.2: ICMP echo request, id 2, seq 16821, length 64
            10:47:01.086522 IP 10.200.0.2 > vma-1719908643-3463576: ICMP echo reply, id 2, seq 16821, length 64
            

            (packets 16795 and 16796 got no reply).


            Or Mergi added a comment - The https://issues.redhat.com/browse/OCPBUGS-29888 bug is verified; the latest bits should be available in OCP 4.16.0-0.nightly-2024-02-26-013420.

            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of cpaas-midstream / openshift-virtualization / kubevirt on branch cnv-4.15-rhel-9_upstream_ce66d4e81b381a187ec3fd9b5a3344c8:

            Updated US source to: 285eb7b Merge pull request #11383 from kubevirt-bot/cherry-pick-11069-to-release-1.1

            Or Mergi added a comment (edited) -

            Following the original discussion at bz-2186372,

            The issue occurs when the interface is defined with an explicit MAC address (manually or automatically through KubeMacPool) on nodes that have IPv6 enabled.
            During the migration, frames may be forwarded to the destination node while the domain is active on the source and still not running at the destination.

            When the migration destination pod is created, an IPv6 NS (Neighbor Solicitation)
            and NA (Neighbor Advertisement) are sent automatically by the kernel.
            The forwarding tables of the switches at the endpoints (e.g. the migration destination node) get updated, and traffic is forwarded to the migration destination before the migration is completed [1].

            The solution is to have the bridge CNI create the pod interface in "link-down" state [2] so that the IPv6 NS/NA packets are avoided; KubeVirt, in turn, sets the pod interface to "link-up" [3]. A minimal illustration of this ordering follows the references below.

            The KubeVirt and bridge CNI PRs are merged; I verified this on a local environment with the latest main of the bridge CNI and KubeVirt [4].

            [1] https://bugzilla.redhat.com/show_bug.cgi?id=2186372#c6
            [2] https://github.com/kubevirt/kubevirt/pull/11069
            [3] https://github.com/containernetworking/plugins/pull/997
            [4] https://github.com/kubevirt/kubevirt/pull/11069#issuecomment-1908510115
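
            As a minimal illustration of why the link-down ordering avoids the premature frames (a hedged sketch on a scratch veth pair, not the actual CNI or KubeVirt code): the kernel only assigns the IPv6 link-local address and runs duplicate address detection, which emits the NS/MLD frames seen in the captures above, once the interface is brought up.

            # Create a test veth pair; newly created links start in the DOWN state.
            ip link add demo0 type veth peer name demo1
            ip link set demo0 address 02:6a:85:00:00:21   # assign the VM MAC while the link is still down

            # While demo0 is down there is no IPv6 link-local address and no NS/NA/MLD frames are emitted.
            ip -6 addr show dev demo0

            # Bringing the link up is what triggers address assignment and DAD,
            # i.e. the point where the NS/MLD frames would be sent on the wire.
            ip link set demo1 up
            ip link set demo0 up
            ip -6 addr show dev demo0   # with EUI-64 address generation: fe80::6a:85ff:fe00:21 (tentative, then valid)

            # Clean up
            ip link del demo0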


            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of cpaas-midstream / openshift-virtualization / kubevirt on branch cnv-0.0-rhel-9_upstream_bb47cf9eb690396297cefb45bb372a7c:

            Updated US source to: d93e79a Merge pull request #11103 from akalenyu/match-err-notfound

            GitLab CEE Bot added a comment - CPaaS Service Account mentioned this issue in a merge request of cpaas-midstream / openshift-virtualization / kubevirt on branch cnv-0.0-rhel-9_upstream_f044f3c86727cd9b9ea8b1077a9f9c3c:

            Updated US source to: e6c7660 Merge pull request #11106 from andreabolognani/bazeldnf
