Epic
Resolution: Unresolved
Graduate Flavor based PCI in Placement feature to full support
The feature is available in nova Antelope and is therefore available in 18.0 as well. During 18.0-GA we marked the feature as Technology Preview because QE and documentation coverage were missing; see OSPRH-19.
Let's now graduate this feature to full support.
Note that this does not mean the feature will be enabled by default. We keep it disabled by default, as it is upstream.
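For context, enabling the feature is a nova configuration change; a minimal sketch, assuming the option names documented for upstream nova 2023.1 (Antelope):

```ini
# nova.conf on each compute node: have the PCI tracker report PCI
# device inventories and allocations to Placement.
[pci]
report_in_placement = true

# nova.conf on the scheduler nodes: generate Placement queries from
# the PCI requests expressed via the flavor's PCI alias.
[filter_scheduler]
pci_in_placement = true
```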
Use cases:
- As an admin, I want to see in the Placement API which PCI devices nova considers available, free, or allocated on a given compute host.
- As an admin, I want to be able to reserve PCI devices in Placement so that, even though they are configured to be available to nova, nova will not use them for VMs. This enables an external tool to perform configuration on those devices while they are reserved. See for example NVMe cleanup: OSPRH-13064.
- As an admin, I want to be able to group PCI devices by user-defined resource classes and request those resource classes via the PCI alias in the flavor. This enables a configuration where a subset of PCI devices with the same PCI vendor_id and product_id can be consumed together. See for example GPUs with Infinity Fabric (OSPRH-12329) or GPUs with NVLink (OSPRH-10828).
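The grouping use case above can be sketched with nova's [pci]device_spec and [pci]alias options; a hedged example assuming the upstream nova 2023.1 syntax, where CUSTOM_GPU_IF, the vendor_id/product_id values, and the alias name are illustrative:

```ini
[pci]
# Tag matching devices with a user-defined resource class instead of
# the default CUSTOM_PCI_<vendor_id>_<product_id> class. Different
# device_spec lines can map subsets of the same vendor/product to
# different resource classes.
device_spec = { "vendor_id": "10de", "product_id": "1eb8", "resource_class": "CUSTOM_GPU_IF" }

# The alias requests devices by that resource class; a flavor then
# references the alias via the pci_passthrough:alias extra spec.
alias = { "resource_class": "CUSTOM_GPU_IF", "name": "gpu-if" }
```

The reservation use case is then an ordinary Placement operation on the device's resource provider inventory, e.g. raising the reserved value for the device's resource class (with osc-placement this should be possible via the resource provider inventory commands).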
References
- relates to OSPRH-12329 Suggest & document intermediate story for GPU placement (Closed)