• Type: Initiative
    • Resolution: Done
    • Accelerator Enablement

      Build a new version of torch, torch-2.8, for CUDA, ROCm, and Gaudi. The CUDA work should
      begin before the ROCm and Gaudi work, and should be staged in a branch of the builder
      repository for ROCm and Gaudi to consume. This way, work on all three vendor variants can
      proceed in parallel.
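      The staging workflow described above can be sketched roughly as follows. This is a
      hypothetical illustration only: the repository path and the branch names
      (torch-2.8-cuda, torch-2.8-rocm, torch-2.8-gaudi) are assumptions, not the actual
      names used in the builder repository.

```shell
set -e
# Hypothetical sketch: stand in for a clone of the builder repository.
git init -q builder
cd builder
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "base"

# CUDA work begins first and is staged on its own branch.
git checkout -q -b torch-2.8-cuda
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m "cuda: torch-2.8 build changes"

# ROCm and Gaudi branches start from the staged CUDA branch,
# so their work can proceed in parallel.
git checkout -q -b torch-2.8-rocm torch-2.8-cuda
git checkout -q -b torch-2.8-gaudi torch-2.8-cuda
```

      Branching ROCm and Gaudi off the staged CUDA branch (rather than off the default
      branch) is what lets the shared CUDA changes land once and be consumed by both.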

      We are initially targeting the versions listed below; at a minimum we will aim for:

      torch-2.8 [2]
      ROCM_VERSION=6.3 or newer [2]
      CUDA_VERSION=12.9 [3]
      NVIDIA_DRIVER_VERSION=580 or newer [3]
      NVIDIA_REQUIRE_CUDA=cuda==12.9 [3]
      triton-3.3 [4]
      vllm-version-unknown [5]

      [1] https://en.wikipedia.org/wiki/CUDA#GPUs_supported
      [2] https://pytorch.org/get-started/locally/
      [3] https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#
      [4] https://pytorch.org/blog/pytorch-2-7/
      [5] https://github.com/vllm-project/vllm/blob/main/pyproject.toml (As of writing this torch-2.8 is not supported by vllm, https://github.com/vllm-project/vllm/pull/19507 would add it)
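      One way to carry the minimal targets above through a build is as environment pins.
      The variable names below mirror the list but are assumptions about how the builder
      consumes them; only ROCM_VERSION, CUDA_VERSION, NVIDIA_DRIVER_VERSION, and
      NVIDIA_REQUIRE_CUDA appear verbatim in the list.

```shell
# Hypothetical pins matching the minimal targets; names other than those
# in the list above are illustrative.
export TORCH_VERSION=2.8
export ROCM_VERSION=6.3            # 6.3 or newer
export CUDA_VERSION=12.9
export NVIDIA_DRIVER_VERSION=580   # 580 or newer
export NVIDIA_REQUIRE_CUDA="cuda==12.9"
export TRITON_VERSION=3.3
```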


              Unassigned
              Prarit Bhargava (prarit@redhat.com)
              Frank's Team