Task
Resolution: Unresolved
Major
3.12.0.GA
Inspired by:
- https://www.opensourcerers.org/2023/11/06/a-personal-ai-assistant-for-developers-that-doesnt-phone-home/
- https://github.com/sa-mw-dach/dev_demos/blob/main/demos/05_ai_assistant/devspaces/devfile.yaml
Caveats:
- ollama: a container based on the Ollama container image that runs the Ollama web server and is additionally configured to use GPUs by setting `nvidia.com/gpu: 1` in the container's resource requests. Because of this configuration in the devfile, the ollama container (and with it the entire pod) is scheduled on an OpenShift worker node that hosts a GPU, which significantly accelerates inference for the local LLM and thus greatly improves the performance of the personal AI assistant for developers.
- No air-gapped or s390x support.
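The GPU scheduling described above could be expressed in a devfile roughly as follows. This is a minimal sketch, not the referenced devfile itself: it assumes the Dev Spaces `container-overrides` component attribute for injecting the GPU resource limit, and the image tag, memory sizing, and component names are illustrative.

```yaml
schemaVersion: 2.2.0
metadata:
  name: ai-assistant
components:
  - name: ollama
    attributes:
      # Assumed mechanism: Dev Spaces merges these overrides into the
      # generated container spec. The GPU limit causes the scheduler to
      # place the pod on a GPU-equipped worker node.
      container-overrides:
        resources:
          limits:
            nvidia.com/gpu: 1
    container:
      image: docker.io/ollama/ollama:latest   # assumed image reference
      memoryLimit: 8Gi                        # assumed sizing
      mountSources: false
      endpoints:
        - name: ollama
          targetPort: 11434                   # Ollama's default API port
```

With a fragment like this, the entire workspace pod follows the ollama container onto the GPU node, which is why the devfile configuration affects where the whole pod is deployed.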