Type: Feature Request
Resolution: Unresolved
Priority: Normal
Component: Incidents & Support
Goal
Connect VMs through InfiniBand at L2 (instead of the more commonplace Ethernet).
User Stories
- As a developer, I want to connect my VMs to a high-speed, low-latency InfiniBand network, just as I can connect my pods to one.
Notes
The request from CNV-16921 (thank you, Germano):
Currently the SR-IOV operator only allows linkType: ib together with deviceType: netdevice, but deviceType: vfio-pci is required for PCI passthrough to KVM VMs.
Also, setting linkType to ib forces isRdma to be enabled, which is incompatible with vfio-pci.
For SR-IOV VF passthrough to work, the only valid combination at the moment is:
deviceType: vfio-pci
isRdma: false
linkType: eth
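For illustration only, a SriovNetworkNodePolicy expressing that combination could look roughly like the sketch below; the name, resourceName, nodeSelector, numVfs, and nicSelector values are placeholders and not taken from the original report:

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: mlx-vf-passthrough          # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxvfpassthrough    # placeholder resource name
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4                         # example VF count
  nicSelector:
    vendor: "15b3"                  # Mellanox/NVIDIA vendor ID; adjust to the actual NIC
  deviceType: vfio-pci
  isRdma: false
  linkType: eth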
However, this results in a VF configured with an Ethernet link type, not InfiniBand, here:
https://github.com/openshift/sriov-network-operator/blob/master/pkg/plugins/mellanox/mellanox_plugin.go#L191
It shows up like this inside the guest:
CA 'mlx5_0'
        CA type: MT4124
        Number of ports: 1
        Firmware version: 20.28.4512
        Hardware version: 0
        Node GUID: <removed>
        System image GUID: <removed>
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x00010000
                Port GUID: <removed>
                Link layer: Ethernet   <-----
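For comparison, the combination this RFE effectively asks the SR-IOV operator to accept for VM passthrough (a sketch of the requested behaviour, not a policy the operator validates today) would be:

deviceType: vfio-pci
linkType: ib
isRdma: false   # assumes the current rule that linkType: ib forces isRdma is relaxed, since isRdma is incompatible with vfio-pci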
Issue Links
- is triggering: VIRTSTRAT-534 Support InfiniBand in VMs (status: New)