Epic
Resolution: Obsolete
Build a new version of torch, torch-2.8, for CUDA, ROCm, and Gaudi. The CUDA work should begin
before the ROCm and Gaudi work and should be staged in a branch in the builder repository for
ROCm and Gaudi to consume, so that work on all three vendor variants can proceed in parallel.
We are initially targeting the versions listed in the parent Feature. At a minimum we will aim for the following (a verification sketch follows the reference list below):
torch-2.8 [2]
ROCM_VERSION=TBD [2]
CUDA_VERSION=TBD [3]
NVIDIA_DRIVER_VERSION=TBD [3]
NVIDIA_REQUIRE_CUDA=cuda>=TBD [3]
triton-3.4.0 [4]
vllm TBD [5]
[1] https://en.wikipedia.org/wiki/CUDA#GPUs_supported
[2] https://pytorch.org/get-started/locally/
[3] https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#
[4] https://pytorch.org/blog/pytorch-2-7/
[5] https://github.com/vllm-project/vllm/blob/main/pyproject.toml
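As a rough illustration, a smoke test along the following lines could confirm that a built image actually carries the targeted stack. This is a minimal sketch, not part of the build itself; the EXPECTED_* values mirror the TBD placeholders above and are assumptions, not final pins.

```python
"""Hedged sketch: check that a torch-2.8 build reports the intended versions.

The EXPECTED_* constants are placeholders for the TBD pins above.
"""
import importlib.metadata

import torch

EXPECTED_TORCH_PREFIX = "2.8"   # assumption: exact patch release is still TBD
EXPECTED_TRITON = "3.4.0"       # from the version list above

print("torch:", torch.__version__)
assert torch.__version__.startswith(EXPECTED_TORCH_PREFIX), torch.__version__

# CUDA builds report the toolkit version; ROCm builds report torch.version.hip instead.
print("cuda toolkit:", torch.version.cuda)
print("hip:", getattr(torch.version, "hip", None))
print("accelerator available:", torch.cuda.is_available())

triton_version = importlib.metadata.version("triton")
print("triton:", triton_version)
assert triton_version == EXPECTED_TRITON, triton_version

# The vllm pin is TBD; only report it if the package is present in the image.
try:
    print("vllm:", importlib.metadata.version("vllm"))
except importlib.metadata.PackageNotFoundError:
    print("vllm: not installed")
```

Such a check could run once per vendor variant after the image is built, since torch exposes the same version attributes on both CUDA and ROCm builds.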
Intel's Gaudi does not currently support torch-2.8 according to
https://docs.habana.ai/en/latest/Support_Matrix/Support_Matrix.html,
so no Gaudi work needs to be completed for this Epic.
clones: AIPCC-1537 build torch-2.7 (Closed)