AI Platform Core Components / AIPCC-5938

Floating point exception (core dumped) in `convolution_backward`

    • Type: Story
    • Resolution: Done
    • Priority: Undefined
    • Component: PyTorch
    • Sprints: PyTorch Sprint 15, PyTorch Sprint 16, PyTorch Sprint 17

          🐛 Describe the bug

      Under the specific inputs shown below, `torch.ops.aten.convolution_backward` crashes with a floating point exception and dumps core.
      ```python
      import torch

      grad_output = torch.full((5, 4, 3,), 0.5, dtype=torch.double)
      input = torch.full((2,5,8,), 0.5, dtype=torch.double)
      weight = torch.full((2,5,8,), 1, dtype=torch.double)
      bias_sizes = [1]
      stride = [65534]
      padding = [1]
      dilation = [536870912]
      transposed = True
      output_padding = [1879048192]
      groups = 0  # groups == 0 appears to be the trigger: the native shape checks divide/modulo by `groups`
      output_mask = [0,1,1]
      torch.ops.aten.convolution_backward(grad_output, input, weight, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, output_mask)
      ```
      Output
      ```
      Floating point exception (core dumped)
      ```
      ASAN Output
      ```
      =================================================================
      ==4097879==ERROR: AddressSanitizer: FPE on unknown address 0x7f245990af66 (pc 0x7f245990af66 bp 0x7ffcce6b6ed0 sp 0x7ffcce6b66b0 T0)
      #0 0x7f245990af66 in check_shape_forward<long int> /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:678
      #1 0x7f245990c8dc in check_shape_backward<long int> /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:742
      #2 0x7f2459900bef in at::native::convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, std::array<bool, 3ul>) /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:2020
      #3 0x7f245e584d2b in wrapper_CompositeExplicitAutograd__convolution_backward /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:1763
      #4 0x7f245e8e0cee in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
      #5 0x7f245e8e0cee in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
      #6 0x7f245dc45bfa in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>&&, c10::ArrayRef<c10::SymInt>&&, c10::ArrayRef<c10::SymInt>&&, c10::ArrayRef<c10::SymInt>&&, bool&&, c10::ArrayRef<c10::SymInt>&&, c10::SymInt&&, std::array<bool, 3ul>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
      #7 0x7f245dad3adb in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::KernelFunction::call<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
      #8 0x7f245dad3adb in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::Dispatcher::redispatch<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul> >(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>)> const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
      #9 0x7f245d81bc54 in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
      #10 0x7f245d81bc54 in at::_ops::convolution_backward::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:1824
      #11 0x7f24669a1251 in at::redispatch::convolution_backward_symint(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:1907
      #12 0x7f24666e36aa in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_4.cpp:7715
      #13 0x7f24666e501c in convolution_backward /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_4.cpp:7716
      #14 0x7f24668f42f7 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
      #15 0x7f24668f42f7 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
      #16 0x7f2466971891 in call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor, at::Tensor>(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3>), torch::autograd::VariableType::(anonymous namespace)::convolution_backward>, std::tuple<at::Tensor, at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3> > >, false, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3> > /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:506
      #17 0x7f2466942a7d in call_functor_with_args_from_stack<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor, at::Tensor>(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3>), torch::autograd::VariableType::(anonymous namespace)::convolution_backward>, std::tuple<at::Tensor, at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3> > >, false> /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:518
      #18 0x7f24668f4607 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:584
      #19 0x7f24a0fd5bec in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
      #20 0x7f24a0fd86aa in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:46
      #21 0x7f24a27720f1 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:747
      #22 0x7f24589be912 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:461
      #23 0x7f2468e5a610 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:465
      #24 0x7f246ad0aa16 in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/runtime/register_c10_ops.cpp:13
      #25 0x7f246ad0d9b0 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:61
      #26 0x7f246ad0d78f in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:111
      #27 0x7f246ad0d448 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
      #28 0x7f24a1bd6020 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/include/c++/11/bits/std_function.h:590
      #29 0x7f24a1bc7416 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /mnt/pytorch-2.5.0/aten/src/ATen/core/stack.h:42
      #30 0x7f24a1bb9b45 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args const&, pybind11::kwargs const&, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:813
      #31 0x7f24a1bbb4e0 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args const&, pybind11::kwargs const&, bool, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:892
      #32 0x7f24a16e06ba in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/python/init.cpp:1771
      #33 0x7f24a17fce7e in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&, 0, 1, pybind11::detail::void_type> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1631
      #34 0x7f24a17ca944 in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1600
      #35 0x7f24a17489b4 in operator() /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:296
      #36 0x7f24a1748bc1 in _FUN /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:267
      #37 0x7f24a026bc89 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:987
      #38 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
      #39 0x6153f4 in _PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:361
      #40 0x54df40 in PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:373
      #41 0x54df40 in PyCFunction_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:381
      #42 0x54df40 in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1355
      #43 0x606f14 in _PyEval_EvalFrame /usr/local/src/conda/python-3.13.0/Include/internal/pycore_ceval.h:119
      #44 0x606f14 in _PyEval_Vector /usr/local/src/conda/python-3.13.0/Python/ceval.c:1806
      #45 0x606f14 in _PyFunction_Vectorcall /usr/local/src/conda/python-3.13.0/Objects/call.c:413
      #46 0x606f14 in _PyObject_VectorcallDictTstate /usr/local/src/conda/python-3.13.0/Objects/call.c:135
      #47 0x65dc9c in _PyObject_Call_Prepend /usr/local/src/conda/python-3.13.0/Objects/call.c:504
      #48 0x65dc9c in slot_tp_call /usr/local/src/conda/python-3.13.0/Objects/typeobject.c:9537
      #49 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
      #50 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
      #51 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
      #52 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
      #53 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
      #54 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
      #55 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
      #56 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
      #57 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
      #58 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
      #59 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
      #60 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
      #61 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
      #62 0x7f24a936ad8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
      #63 0x7f24a936ae3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
      #64 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)

      AddressSanitizer can not provide additional info.
      SUMMARY: AddressSanitizer: FPE /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:678 in check_shape_forward<long int>
      ==4097879==ABORTING
      ```
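
      The ASAN summary points at the shape checks in `check_shape_forward` (Convolution.cpp:678); with `groups = 0`, the integer division/modulo by `groups` performed there is the likely source of the SIGFPE. Until the operator validates its arguments, a caller-side guard avoids the hard crash. A minimal sketch, assuming a hypothetical `safe_convolution_backward` wrapper (not a PyTorch API):
      ```python
      import torch

      # Hypothetical caller-side guard; rejects degenerate arguments before
      # they reach the native shape checks that crash the process.
      def safe_convolution_backward(grad_output, input, weight, bias_sizes,
                                    stride, padding, dilation, transposed,
                                    output_padding, groups, output_mask):
          # groups == 0 appears to be the trigger: the shape checks divide
          # (and take a modulo) by `groups`, which raises SIGFPE for zero.
          if groups < 1:
              raise ValueError(f"groups must be a positive integer, got {groups}")
          # Stride and dilation elements should be positive as well.
          for name, values in (("stride", stride), ("dilation", dilation)):
              if any(v < 1 for v in values):
                  raise ValueError(f"{name} must contain positive values, got {values}")
          return torch.ops.aten.convolution_backward(
              grad_output, input, weight, bias_sizes, stride, padding, dilation,
              transposed, output_padding, groups, output_mask)
      ```
      With the arguments from the reproducer above, this wrapper raises a Python `ValueError` instead of terminating the interpreter.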

          Versions

      PyTorch version: 2.5.0a0+git32f585d
      Is debug build: False
      CUDA used to build PyTorch: None
      ROCM used to build PyTorch: N/A

      OS: Ubuntu 22.04.4 LTS (x86_64)
      GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
      Clang version: Could not collect
      CMake version: version 3.22.1
      Libc version: glibc-2.35

      Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
      Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
      Is CUDA available: False
      CUDA runtime version: No CUDA
      CUDA_MODULE_LOADING set to: N/A
      GPU models and configuration: No CUDA
      Nvidia driver version: No CUDA
      cuDNN version: No CUDA
      HIP runtime version: N/A
      MIOpen runtime version: N/A
      Is XNNPACK available: True

      CPU:
      Architecture: x86_64
      CPU op-mode(s): 32-bit, 64-bit
      Address sizes: 46 bits physical, 48 bits virtual
      Byte Order: Little Endian
      CPU(s): 80
      On-line CPU(s) list: 0-79
      Vendor ID: GenuineIntel
      Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
      CPU family: 6
      Model: 85
      Thread(s) per core: 2
      Core(s) per socket: 20
      Socket(s): 2
      Stepping: 7
      CPU max MHz: 4000.0000
      CPU min MHz: 800.0000
      BogoMIPS: 4200.00
      Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
      Virtualization: VT-x
      L1d cache: 1.3 MiB (40 instances)
      L1i cache: 1.3 MiB (40 instances)
      L2 cache: 40 MiB (40 instances)
      L3 cache: 55 MiB (2 instances)
      NUMA node(s): 2
      NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
      NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
      Vulnerability Gather data sampling: Mitigation; Microcode
      Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
      Vulnerability L1tf: Not affected
      Vulnerability Mds: Not affected
      Vulnerability Meltdown: Not affected
      Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
      Vulnerability Reg file data sampling: Not affected
      Vulnerability Retbleed: Mitigation; Enhanced IBRS
      Vulnerability Spec rstack overflow: Not affected
      Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
      Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
      Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
      Vulnerability Srbds: Not affected
      Vulnerability Tsx async abort: Mitigation; TSX disabled

      Versions of relevant libraries:
      [pip3] numpy==2.1.3
      [pip3] torch==2.5.0a0+git32f585d
      [conda] numpy 2.1.3 pypi_0 pypi
      [conda] torch 2.5.0a0+git32f585d pypi_0 pypi

              Assignee: rh-ee-visgoyal Vishal Goyal
              Reporter: rh-ee-visgoyal Vishal Goyal
              Team: PyTorch Core