      Building on the initial code that rh-ee-jbarea wrote.

      Switched to the python-openai client instead of llamastack after struggling to get llamastack working with vLLM (also, we don't need a whole stack for this).
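      For reference, a minimal sketch of the python-openai approach pointed at a vLLM OpenAI-compatible endpoint. The base URL, model name, and API key below are placeholders for illustration, not values taken from this issue:

          # Minimal sketch: talk to a vLLM server via its OpenAI-compatible API
          # using the openai Python package (v1+). URL/model/key are assumptions.
          from openai import OpenAI

          client = OpenAI(
              base_url="http://localhost:8000/v1",  # hypothetical vLLM endpoint
              api_key="not-needed",  # vLLM ignores the key unless started with --api-key
          )

          response = client.chat.completions.create(
              model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
              messages=[{"role": "user", "content": "Hello from python-openai + vLLM"}],
          )
          print(response.choices[0].message.content)

      This keeps the dependency surface to a single client library rather than pulling in the full llamastack runtime.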
