Figure out LLVM dependencies
Type: Epic
Status: To Do
Resolution: Unresolved
Our attempt to build TensorFlow from source failed because of our version of Clang:
ERROR: /opt/app-root/src/.cache/bazel/_bazel_root/a38dfe1b75de8af9dad503b5a49950eb/external/local_xla/xla/stream_executor/cuda/BUILD:2064:14: Compiling xla/stream_executor/cuda/cub_sort_kernel_cuda_impl.cu.cc [for tool] failed: (Exit 1): clang-19 failed: error executing CppCompile command (from target @@local_xla//xla/stream_executor/cuda:cub_sort_kernel_cuda_impl_f16)
/usr/bin/clang-19 -MD -MF bazel-out/k8-opt-exec-ST-0465588ec812/bin/external/local_xla/xla/stream_executor/cuda/_objs/cub_sort_kernel_cuda_impl_f16/cub_sort_kernel_cuda_impl.cu.d ... (remaining 169 arguments skipped)
clang-19: error: unsupported CUDA gpu architecture: sm_100
clang-19: error: unsupported CUDA gpu architecture: sm_120
Target //tensorflow/tools/pip_package:wheel failed to build
We would probably need Clang 20, but we would first need to figure out whether the compiled binaries then have runtime dependencies on LLVM.
We have an agreement with the LLVM team that we will not ship LLVM bits to customers.
So we would need to:
- install clang20-20.1.8-2.1.el9ai.x86_64.rpm, which is already in our repos and was built as LLVM 20 for Rust;
- see whether TensorFlow can be built with it;
- install the resulting wheels on the AIPCC app image; and
- run tests to confirm the runtime works fine without any new dependencies.
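For the last check, one option is to scan the installed wheel's shared objects with ldd and flag anything that links against LLVM or Clang libraries. This is a minimal sketch; the library-name patterns (libLLVM*, libclang*) and the helper names are assumptions, not an agreed tool:

```python
import re
import subprocess
from pathlib import Path

# Shared-library names that would indicate a runtime LLVM dependency.
# These patterns are an assumption about how the libraries are named.
LLVM_LIB_RE = re.compile(r"lib(LLVM|clang)[^ ]*\.so", re.IGNORECASE)

def llvm_deps_from_ldd(ldd_output: str) -> list[str]:
    """Return the lines of ldd output that reference LLVM/Clang libraries."""
    return [line.strip() for line in ldd_output.splitlines()
            if LLVM_LIB_RE.search(line)]

def scan_wheel_install(site_packages: str) -> dict[str, list[str]]:
    """Run ldd on every .so under an installed package tree and collect
    any LLVM/Clang runtime dependencies found."""
    offenders = {}
    for so in Path(site_packages).rglob("*.so*"):
        result = subprocess.run(["ldd", str(so)],
                                capture_output=True, text=True)
        hits = llvm_deps_from_ldd(result.stdout)
        if hits:
            offenders[str(so)] = hits
    return offenders
```

Running `scan_wheel_install` against the site-packages directory of the AIPCC app image after installing the wheels would give a quick pass/fail signal: an empty result means no new LLVM runtime dependency was introduced.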