• Build Torch and other packages with NumPy 2.x

      We are patching some packages to force "numpy<2.0" at build time. This version cap is now causing problems: packages like Torch do not work when they are compiled against NumPy 1.x but the runtime environment has NumPy 2.x. The reverse works, though: modules compiled against NumPy 2.x run with both NumPy 1.x and 2.x.
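
      A quick way to spot the mismatch in a given environment is an interop smoke test. The sketch below only assumes that torch.from_numpy() fails when Torch could not initialize the installed NumPy; the exact error text may vary between Torch builds:

      # Hedged smoke test: does the installed Torch work with the installed NumPy?
      import numpy as np
      import torch

      print("numpy", np.__version__, "torch", torch.__version__)
      try:
          print("interop OK:", torch.from_numpy(np.arange(3)))
      except Exception as exc:  # e.g. "Numpy is not available" on a NumPy 1.x build
          print("Torch/NumPy interop broken:", exc)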

      Pinning "numpy<2.0" globally does not work either, because "numba >= 0.60" requires "numpy >= 2.0" to build, while "numba < 0.60" needs an older "llvmlite" to build, which in turn requires LLVM 14. Our builder image now ships LLVM 15. We therefore need to rebuild Torch and the other affected packages with NumPy 2.x.
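
      To see which installed packages still declare a "numpy < 2" installation requirement (wheel metadata only covers run-time requirements, not build-time pins), a rough audit is possible; this sketch assumes Python 3.8+ (importlib.metadata) plus the "packaging" library and does a crude string match on the specifier:

      # Hedged sketch: list installed distributions that pin NumPy below 2.0.
      from importlib.metadata import distributions
      from packaging.requirements import Requirement

      for dist in distributions():
          for req_str in dist.requires or []:
              try:
                  req = Requirement(req_str)
              except Exception:
                  continue  # skip malformed or exotic requirement strings
              if req.name.lower() == "numpy" and "<2" in str(req.specifier):
                  print(f"{dist.metadata['Name']}: {req}")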

      I think RHELAI is only getting away with this because we also patch xformers to add an installation requirement on "numpy < 2.0", which keeps NumPy 1.x in the runtime environment.

      For reference, this is the failure when Torch built against NumPy 1.x is imported with NumPy 2.2.5 installed (via the vllm entry point):

      A module that was compiled using NumPy 1.x cannot be run in
      NumPy 2.2.5 as it may crash. To support both 1.x and 2.x
      versions of NumPy, modules must be compiled with NumPy 2.0.
      Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
      If you are a user of the module, the easiest solution will be to
      downgrade to 'numpy<2' or try to upgrade the affected module.
      We expect that some modules will need time to support NumPy 2.
      Traceback (most recent call last):
        File "/opt/app-root/bin/vllm", line 5, in <module>
          from vllm.entrypoints.cli.main import main
        File "/opt/app-root/lib64/python3.11/site-packages/vllm/__init__.py", line 10, in <module>
          import vllm.env_override  # isort:skip  # noqa: F401
        File "/opt/app-root/lib64/python3.11/site-packages/vllm/env_override.py", line 4, in <module>
          import torch
        File "/opt/app-root/lib64/python3.11/site-packages/torch/__init__.py", line 2222, in <module>
          from torch import quantization as quantization  # usort: skip
        File "/opt/app-root/lib64/python3.11/site-packages/torch/quantization/__init__.py", line 2, in <module>
          from .fake_quantize import *  # noqa: F403
        File "/opt/app-root/lib64/python3.11/site-packages/torch/quantization/fake_quantize.py", line 10, in <module>
          from torch.ao.quantization.fake_quantize import (
        File "/opt/app-root/lib64/python3.11/site-packages/torch/ao/quantization/__init__.py", line 12, in <module>
          from .pt2e._numeric_debugger import (  # noqa: F401
        File "/opt/app-root/lib64/python3.11/site-packages/torch/ao/quantization/pt2e/_numeric_debugger.py", line 9, in <module>
          from torch.export import ExportedProgram
        File "/opt/app-root/lib64/python3.11/site-packages/torch/export/__init__.py", line 68, in <module>
          from .decomp_utils import CustomDecompTable
        File "/opt/app-root/lib64/python3.11/site-packages/torch/export/decomp_utils.py", line 5, in <module>
          from torch._export.utils import (
        File "/opt/app-root/lib64/python3.11/site-packages/torch/_export/__init__.py", line 48, in <module>
          from .wrappers import _wrap_submodules
        File "/opt/app-root/lib64/python3.11/site-packages/torch/_export/wrappers.py", line 7, in <module>
          from torch._higher_order_ops.strict_mode import strict_mode
        File "/opt/app-root/lib64/python3.11/site-packages/torch/_higher_order_ops/__init__.py", line 1, in <module>
          from torch._higher_order_ops.cond import cond
        File "/opt/app-root/lib64/python3.11/site-packages/torch/_higher_order_ops/cond.py", line 9, in <module>
          import torch._subclasses.functional_tensor
        File "/opt/app-root/lib64/python3.11/site-packages/torch/_subclasses/functional_tensor.py", line 45, in <module>
          class FunctionalTensor(torch.Tensor):
        File "/opt/app-root/lib64/python3.11/site-packages/torch/_subclasses/functional_tensor.py", line 275, in FunctionalTensor
          cpu = _conversion_method_template(device=torch.device("cpu"))
      /opt/app-root/lib64/python3.11/site-packages/torch/_subclasses/functional_tensor.py:275: UserWarning: Failed to initialize NumPy: 
      A module that was compiled using NumPy 1.x cannot be run in
      NumPy 2.2.5 as it may crash. To support both 1.x and 2.x
      versions of NumPy, modules must be compiled with NumPy 2.0.
      Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
      If you are a user of the module, the easiest solution will be to
      downgrade to 'numpy<2' or try to upgrade the affected module.
      We expect that some modules will need time to support NumPy 2.
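
      Once Torch and the other packages are rebuilt against NumPy 2.x, the same import can serve as a regression check; the sketch below assumes the "Failed to initialize NumPy" UserWarning text stays as shown above:

      # Hedged CI check: fail if importing torch raises the NumPy-init warning.
      import sys
      import warnings

      with warnings.catch_warnings(record=True) as caught:
          warnings.simplefilter("always")
          import torch  # noqa: F401

      bad = [w for w in caught if "Failed to initialize NumPy" in str(w.message)]
      if bad:
          print("torch was built against an incompatible NumPy:", bad[0].message)
          sys.exit(1)
      print("torch imported cleanly against the installed NumPy")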

      Assignee: Unassigned
      Reporter: Christian Heimes <cheimes@redhat.com>