Initiative
Resolution: Unresolved
Critical
None
None
Build a new version of torch, torch-2.8, for CUDA, ROCm,
Gaudi, Spyre, and Google TPU. The CUDA work should begin before the
other architectures and should be staged in a branch of the builder
repository for ROCm and Gaudi to consume, so that work on all three
vendor variants can proceed in parallel.
We are initially targeting the versions listed here (a pinning sketch of these targets follows the references):
torch-2.8 [2]
ROCM_VERSION=6.4.3 [2]
CUDA_VERSION=12.9 [3]
NVIDIA_DRIVER_VERSION=575.57.08 [3]
NVIDIA_REQUIRE_CUDA=cuda==12.9 [3]
triton-3.4 [4]
vllm-0.10.2 [5]
xformers==0.0.31 (for now)
[1] https://en.wikipedia.org/wiki/CUDA#GPUs_supported
[2] https://pytorch.org/get-started/locally/
[3] https://docs.nvidia.com/cuda/archive/12.9.1/cuda-toolkit-release-notes/index.html
[4] https://pytorch.org/blog/pytorch-2-7/
[5] https://github.com/vllm-project/vllm/blob/main/pyproject.toml
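For convenience, the same targets expressed as builder-style version pins. This is only a sketch: ROCM_VERSION, CUDA_VERSION, NVIDIA_DRIVER_VERSION, and NVIDIA_REQUIRE_CUDA are quoted from the list above, while TORCH_VERSION, TRITON_VERSION, VLLM_VERSION, and XFORMERS_VERSION are assumed names, not necessarily the builder repository's actual arguments:

    TORCH_VERSION=2.8                  # torch-2.8 [2]; assumed variable name
    ROCM_VERSION=6.4.3                 # [2]
    CUDA_VERSION=12.9                  # [3]
    NVIDIA_DRIVER_VERSION=575.57.08    # [3]
    NVIDIA_REQUIRE_CUDA=cuda==12.9     # [3]
    TRITON_VERSION=3.4                 # triton-3.4 [4]; assumed variable name
    VLLM_VERSION=0.10.2                # vllm-0.10.2 [5]; assumed variable name
    XFORMERS_VERSION=0.0.31            # "for now"; assumed variable name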
is blocked by:
  AIPCC-1138 Update Platform to RHEL 9.6 (Closed)
is duplicated by:
  AIPCC-1952 Upgrade platform to support torch 2.8 (Closed)
  AIPCC-3784 build torch-2.8 (Closed)
relates to:
  AIPCC-5449 [QE] Torch 2.8 Test Plan Preparation (Closed)
links to