OpenShift Virtualization / CNV-76299

Provide a proxy between the cluster LM network and the cross cluster LM network


cross-cluster-live-migration-network-proxy

      Goal

Reduce the number of IP addresses that have to be allocated on the cross-cluster live migration network. In environments without DHCP it is difficult to create network-attachment-definitions that assign static IP addresses to pods in daemonsets/deployments. This epic is about creating a proxy that is part of the synchronization controller, so that the number of IP addresses needed on the cross-cluster live migration network is reduced to the number of synchronization controllers in the cluster.

      User Stories

      • As a network admin of an OpenShift Virtualization cluster, I want to be able to configure a network-attachment-definition that doesn't require automation to figure out the exclusion ranges. It is acceptable to exclude everything but 2 or 3 IP addresses in this NAD.
      • As an admin of an OpenShift Virtualization cluster, I want to be able to define both an 'in-cluster' and a 'cross-cluster' live migration network. The in-cluster network can be the pod network or any other network that is only accessible inside the cluster.
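A minimal sketch of what such a NAD could look like, assuming a macvlan attachment with the whereabouts IPAM plugin. The interface name, range, and NAD name here are all illustrative (not from this epic): the point is that the `exclude` list can block everything except the last few addresses of the range up front, so no automation is needed to compute per-node exclusion ranges.

```yaml
# Hypothetical example: a cross-cluster LM NAD that excludes all of
# 192.168.100.0/24 except .252-.254, leaving ~3 assignable addresses
# for the synchronization-controller/proxy pods.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: cross-cluster-lm
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.100.0/24",
        "exclude": [
          "192.168.100.0/25",
          "192.168.100.128/26",
          "192.168.100.192/27",
          "192.168.100.224/28",
          "192.168.100.240/29",
          "192.168.100.248/30"
        ]
      }
    }
```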

      Non-Requirements

      Notes

      • The idea behind the proxy is that, from a cluster perspective, the proxy is just another virt-handler: it opens the ports it listens on to perform the actual migration. The novel part is that those ports are then mapped to a proxy on another cluster, which reverses the process. In the end, the virt-handlers in each cluster are still talking to each other.
      • There is a potential for the proxy to become a bottleneck when doing many concurrent migrations. This is mitigated by a default maximum of 5 concurrent migrations per cluster. We should ensure the synchronization-controller/proxy has enough CPU/memory available to perform the migrations efficiently.
      • We will have to measure whether there is a performance impact, since we are inserting 2 networks into the process (one in-cluster LM network per cluster). If the VM and the proxy are on different nodes, the data has to be transferred over the network a few extra times.
      • We should consider whether we want to 'terminate' the SSL connection between the virt-handler and the proxy. This might remove the need for exchanging KubeVirt CAs between clusters; the only exchange that would remain is between the proxies instead.
      • This will likely create a few minor API changes in KubeVirt (in-cluster LM vs cross-cluster LM network, for instance) and thus will most likely require a VEP.

              phoracek@redhat.com Petr Horacek
              rhn-support-awels Alexander Wels
              Yoss Segev