AI Platform Core Components / AIPCC-441

[builder] llama_cpp_python.yaml does not pass global CMAKE_ARGS


The builder config file https://gitlab.com/redhat/rhel-ai/wheels/builder/-/blob/main/overrides/settings/llama_cpp_python.yaml does not pass the package-global CMAKE_ARGS on to the CUDA and ROCm variants.

Something like

    env:
      CMAKE_ARGS: >-
        -DLLAMA_NATIVE=off
        -DGGML_NATIVE=off
    variants:
      cuda-ubi9:
        env:
          CMAKE_ARGS: >-
            ${CMAKE_ARGS}
            -DGGML_CUDA=on
            ...

should do the trick.
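The intent of the override above is that each variant's CMAKE_ARGS starts from the package-global value and appends variant-specific flags. A minimal sketch of that merge logic in Python, assuming the builder expands ${CMAKE_ARGS} against the global env before invoking cmake (the merge_env helper and dict layout are illustrative assumptions, not actual builder code):

```python
from string import Template

# Package-global env, mirroring the top-level `env:` block. The YAML
# folded scalar (>-) joins the lines with spaces.
global_env = {
    "CMAKE_ARGS": "-DLLAMA_NATIVE=off -DGGML_NATIVE=off",
}

# Variant env, mirroring `variants.cuda-ubi9.env`. It references the
# global value via ${CMAKE_ARGS} and appends the CUDA flag.
variant_env = {
    "CMAKE_ARGS": "${CMAKE_ARGS} -DGGML_CUDA=on",
}

def merge_env(base: dict, variant: dict) -> dict:
    """Overlay variant env on base, expanding ${VAR} references
    in variant values against the base env."""
    merged = dict(base)
    for key, value in variant.items():
        merged[key] = Template(value).safe_substitute(base)
    return merged

print(merge_env(global_env, variant_env)["CMAKE_ARGS"])
# -> -DLLAMA_NATIVE=off -DGGML_NATIVE=off -DGGML_CUDA=on
```

With this merge, the CUDA variant inherits the native-tuning opt-outs instead of silently replacing the global CMAKE_ARGS.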

       

Assignee: Unassigned
Reporter: Christian Heimes (cheimes@redhat.com)
Votes: 0
Watchers: 2
