Nodepool capacity blocks support for GPU reservations for AWS
- Type: Epic
- Status: To Do
- Resolution: Unresolved
- Priority: Normal
- Parent Link: OCPSTRAT-1590 - HCP Capacity Blocks Support for GPU Reservations
- Progress: 100% To Do, 0% In Progress, 0% Done
- Size: XS
Goal
- Allow specifying AWS Capacity Blocks for a NodePool to ensure access to reserved EC2 instances that require specialized hardware, as sketched below
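A minimal sketch of what the NodePool API surface could carry, assuming a new optional field on HyperShift's existing AWSNodePoolPlatform type. The CapacityReservation field and its shape are assumptions for illustration only, not the finalized API.

```go
// Hypothetical extension of HyperShift's AWS NodePool platform spec.
package v1beta1

// AWSNodePoolPlatform (excerpt): existing type, extended here with an
// assumed Capacity Block reference.
type AWSNodePoolPlatform struct {
	// InstanceType is the EC2 instance type, e.g. "p5.48xlarge" for GPU workloads.
	InstanceType string `json:"instanceType"`

	// CapacityReservation optionally pins NodePool instances to an existing
	// Capacity Block reservation (hypothetical field).
	CapacityReservation *CapacityReservationOptions `json:"capacityReservation,omitempty"`
}

// CapacityReservationOptions identifies the reservation that instances should consume.
type CapacityReservationOptions struct {
	// ID is the Capacity Block reservation ID, e.g. "cr-0123456789abcdef0".
	ID string `json:"id"`

	// MarketType distinguishes a Capacity Block from a regular On-Demand
	// Capacity Reservation; EC2 expects the "capacity-block" market type when
	// launching instances into a Capacity Block.
	MarketType string `json:"marketType,omitempty"`
}
```

Note that a Capacity Block is tied to a specific instance type and Availability Zone, so the NodePool's instance type and placement must match the reservation.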
Why is this important?
- Instances capable of running AI/ML workloads receive significant discounts when reserved through Capacity Blocks
Scenarios
- A Hosted Cluster user on ROSA or self-managed AWS wants to run machine learning workloads with GPU acceleration
Acceptance Criteria
- Dev - Has a valid and documented implementation
- CI - MUST be running successfully with tests automated
- QE - covered in Polarion test plan and tests implemented
Dependencies (internal and external)
- CAPA (Cluster API Provider AWS) support for Capacity Blocks
Previous Work (Optional):
Open questions:
- Should the Capacity Block be created on demand, or only specified by the user? (See the sketch below for the on-demand option.)
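A minimal sketch of the on-demand option, assuming the controller would search for and purchase a Capacity Block itself via the EC2 API (DescribeCapacityBlockOfferings / PurchaseCapacityBlock) instead of requiring a pre-purchased reservation ID. Instance type, duration, and error handling here are illustrative; inputs should be verified against the AWS SDK for Go v2 documentation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := ec2.NewFromConfig(cfg)

	// Find Capacity Block offerings for the desired GPU instance type and duration.
	start := time.Now().Add(24 * time.Hour)
	end := start.Add(14 * 24 * time.Hour)
	offerings, err := client.DescribeCapacityBlockOfferings(ctx, &ec2.DescribeCapacityBlockOfferingsInput{
		InstanceType:          aws.String("p5.48xlarge"),
		InstanceCount:         aws.Int32(1),
		CapacityDurationHours: aws.Int32(24),
		StartDateRange:        aws.Time(start),
		EndDateRange:          aws.Time(end),
	})
	if err != nil {
		panic(err)
	}
	if len(offerings.CapacityBlockOfferings) == 0 {
		panic("no matching Capacity Block offerings found")
	}

	// Purchase the first matching offering; the resulting capacity reservation ID
	// is what a NodePool would then reference.
	purchase, err := client.PurchaseCapacityBlock(ctx, &ec2.PurchaseCapacityBlockInput{
		CapacityBlockOfferingId: offerings.CapacityBlockOfferings[0].CapacityBlockOfferingId,
		InstancePlatform:        ec2types.CapacityReservationInstancePlatformLinuxUnix,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("capacity reservation:", aws.ToString(purchase.CapacityReservation.CapacityReservationId))
}
```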
Done Checklist
- CI - CI is running, tests are automated and merged.
- DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
- QE - Test plans in Polarion: <link or reference to Polarion>
- QE - Automated tests merged: <link or reference to automated tests>
- DOC - Downstream documentation merged: <link to meaningful PR>
is blocked by
- OCPSTRAT-1791 - Support AWS Capacity Blocks for ML in MAPI/CAPI (In Progress)