Spike
Resolution: Done
We've been using goose (https://block.github.io/goose/docs/getting-started) as a coding assistant to add new features to https://github.com/jlebon/coreos-pipeline-assistant.git.
ramalama was recently added as a backend option for goose. We should experiment (for anyone with appropriate GPU hardware) with setting up ramalama locally to serve an LLM, configuring goose to connect to it, and seeing how it works in practice (does it work well, what are the rough edges, etc.). The findings can then be shared back with the team.
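
A rough sketch of what the setup might look like. All of the flags, environment variable names, port, and model name below are assumptions to verify against the ramalama and goose documentation, not a tested recipe:

```shell
# Serve a model locally with ramalama; it exposes an OpenAI-compatible
# API. Port and model spec are example assumptions -- see `ramalama serve --help`.
ramalama serve --port 8080 ollama://smollm:135m

# In another terminal, point goose at the local endpoint via its
# OpenAI-compatible provider (variable names assumed from goose's
# provider configuration docs; verify with `goose configure`).
export GOOSE_PROVIDER=openai
export OPENAI_HOST=http://localhost:8080
export OPENAI_API_KEY=unused   # local server; the key is typically ignored

goose session
```

From there, the interesting questions are the ones in the ticket: responsiveness on local GPU hardware, which model sizes are usable, and where the rough edges are.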