Task
Resolution: Unresolved
Task
- Currently, the OpenAI ingest script uses two approaches:
- Using the `file_search` tool with vector store IDs when creating a response with the llama stack client: https://github.com/thepetk/langgraph-agents/blob/ebde9b71f9801642ee98ce753633ebfa78fedeae/examples/langchain-langgraph/ingest_openai.py#L422-L454
- Querying the vector store directly and passing the retrieved context in the prompt to the llama stack client, without the `file_search` tool: https://github.com/thepetk/langgraph-agents/blob/ebde9b71f9801642ee98ce753633ebfa78fedeae/examples/langchain-langgraph/ingest_openai.py#L457-L523
- Investigate whether there is an option C that performs better than the two approaches above.
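For reference, a minimal sketch of the two request shapes the ticket describes, assuming an OpenAI-compatible Responses API as exposed by llama-stack. The vector store ID, model name, prompt wording, and context chunks below are placeholders, not values from the linked script:

```python
def build_file_search_request(question: str, vector_store_id: str, model: str) -> dict:
    """Approach A: server-side retrieval via the file_search tool.

    The returned payload would be passed to client.responses.create(**payload),
    letting the server search the vector store itself.
    """
    return {
        "model": model,
        "input": question,
        "tools": [
            {"type": "file_search", "vector_store_ids": [vector_store_id]}
        ],
    }


def build_manual_context_request(question: str, context_chunks: list[str], model: str) -> dict:
    """Approach B: client-side retrieval. The chunks would come from querying
    the vector store directly; they are inlined into the prompt and no
    file_search tool is attached to the request.
    """
    context = "\n\n".join(context_chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return {"model": model, "input": prompt}


# Example payloads built with placeholder values:
req_a = build_file_search_request("What was ingested?", "vs_123", "my-model")
req_b = build_manual_context_request(
    "What was ingested?", ["chunk one", "chunk two"], "my-model"
)
```

Comparing option A (one round trip, server-controlled retrieval) against option B (explicit control over chunking and prompt assembly) may suggest where an option C could sit, e.g. tuning the retrieval step while keeping a single request.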
Background
Dependencies and Blockers
QE impacted work
Documentation impacted work
Acceptance Criteria