Feature
Resolution: Unresolved
Normal
Strategic Product Work
OCPSTRAT-895 Openshift LightSpeed GA
83% To Do, 17% In Progress, 0% Done
Program Call
Background
A high-quality RAG process focuses on three areas of optimization:
- Contextualized splitter function
- Embedding techniques and rich metadata
- Retrieval techniques
This Feature card addresses the third area: moving beyond naive RAG into advanced retrieval by identifying the best combination of techniques.
Deliverables
The Feature should evaluate techniques such as:
- Evaluate retrieval performance and quality using embeddings from the filesystem vs. a vector database (chromadb)
  - This may have a dependency on, or an impact on, OLS-120
- Evaluate retrieval performance using a Parent Document Retriever
- Evaluate retrieval performance using re-ranking
- Evaluate retrieval performance using query rewriting
  - Reference paper: https://arxiv.org/pdf/2305.14283.pdf
- Evaluate the quality of LLM answers using RAG summarization
  - For this technique, the prompt sent to the LLM is augmented with a summary of the retrieved documents/chunks instead of the retrieved chunks themselves.
  - Evaluate any improvement in token utilization when using this technique.
- Evaluate the quality of retrieval when using labeled-topic metadata filtering
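As a sketch of the filesystem arm of the first comparison (the document texts, the toy character-frequency embedding, and all names here are illustrative assumptions, not part of this card), embeddings can be persisted to disk and queried by cosine similarity; the vector-database arm would instead load the same documents into a chromadb collection and call its query API:

```python
import json
import math
import os
import tempfile

def embed(text):
    # Toy character-frequency embedding, normalized to unit length.
    # A real pipeline would use a sentence-embedding model (assumption).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Filesystem" store: embeddings written to a JSON file and reloaded.
docs = {"d1": "install the operator", "d2": "configure networking", "d3": "upgrade the cluster"}
path = os.path.join(tempfile.mkdtemp(), "index.json")
with open(path, "w") as f:
    json.dump({doc_id: embed(text) for doc_id, text in docs.items()}, f)
with open(path) as f:
    index = json.load(f)

def retrieve(query, k=2):
    # The chromadb arm would replace this with
    # collection.query(query_texts=[query], n_results=k).
    q = embed(query)
    return sorted(index, key=lambda d: cosine(q, index[d]), reverse=True)[:k]
```

The evaluation would then compare retrieval quality and latency between the two stores over the same document set.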
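The Parent Document Retriever idea (match against small chunks for precision, return the full parent document for context) can be sketched as follows; the word-overlap similarity and sample texts are stand-ins, not the card's method:

```python
def overlap(a, b):
    # Stand-in similarity: shared-word count (a real retriever uses embeddings).
    return len(set(a.lower().split()) & set(b.lower().split()))

parents = {
    "p1": "Operators extend Kubernetes. Install an operator from OperatorHub.",
    "p2": "Cluster upgrades are managed by the version operator. Plan upgrades carefully.",
}

# Index small child chunks (here: sentences), each pointing back to its parent.
children = []
for pid, text in parents.items():
    for sentence in text.split(". "):
        children.append((pid, sentence))

def retrieve_parent(query):
    # Match against the small chunks, but return the whole parent document.
    best_pid, _ = max(children, key=lambda c: overlap(query, c[1]))
    return parents[best_pid]
```

The evaluation would compare answer quality when the LLM receives the full parent versus only the matched chunk.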
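Re-ranking means retrieving a candidate pool with a cheap first-stage scorer and re-ordering it with a costlier second-stage scorer (typically a cross-encoder). A minimal sketch, with toy scorers standing in for both stages:

```python
def cheap_score(query, doc):
    # First-stage score: bag-of-words overlap (stand-in for dense retrieval).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rerank_score(query, doc):
    # Second-stage score: stand-in for a cross-encoder that jointly encodes
    # (query, doc); here an exact-phrase match simply earns a bonus.
    return cheap_score(query, doc) + (2 if query.lower() in doc.lower() else 0)

corpus = [
    "changes you apply take effect after a restart",
    "run oc apply to apply changes to the cluster",
    "install the operator from the catalog",
]

def retrieve_and_rerank(query, k=1):
    # Retrieve a candidate pool cheaply, then re-rank it with the costlier scorer.
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d), reverse=True)[:3]
    return sorted(candidates, key=lambda d: rerank_score(query, d), reverse=True)[:k]
```

Here the first stage scores the top two documents equally, and the re-ranker breaks the tie, which is exactly the effect the evaluation would measure.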
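Query rewriting (the "Rewrite-Retrieve-Read" idea from the referenced paper, arXiv:2305.14283) generates alternative phrasings of the user question, retrieves per phrasing, and merges the results. A sketch with a rule-based rewriter standing in for the LLM call:

```python
def rewrite_query(query):
    # Stand-in for an LLM rewriting step; a real pipeline would prompt the
    # model for paraphrases instead of using string replacement.
    return [query,
            query.replace("how do i", "steps to"),
            query.replace("how do i", "guide for")]

corpus = [
    "steps to upgrade a cluster",
    "upgrade guide for administrators",
    "networking configuration basics",
]

def overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(q, k=1):
    return sorted(corpus, key=lambda d: overlap(q, d), reverse=True)[:k]

def multi_query_retrieve(query, k=1):
    # Retrieve per rewritten query and union the results, preserving order.
    seen = []
    for q in rewrite_query(query.lower()):
        for doc in retrieve(q, k):
            if doc not in seen:
                seen.append(doc)
    return seen
```

The union surfaces a document ("upgrade guide for administrators") that the original phrasing alone would not rank first.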
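The RAG-summarization technique described above replaces the raw retrieved chunks with a summary in the augmented prompt, and the token-utilization gain can be measured directly. A sketch with an extractive filter standing in for the LLM summarization call, and a crude whitespace token count standing in for a real tokenizer:

```python
def summarize(chunks, query):
    # Stand-in for an LLM summarization call: keep only sentences sharing a
    # content word (> 3 chars) with the query; a real chain prompts the model.
    qwords = {w for w in query.lower().split() if len(w) > 3}
    kept = []
    for chunk in chunks:
        for sentence in chunk.split(". "):
            if qwords & set(sentence.lower().split()):
                kept.append(sentence.rstrip("."))
    return ". ".join(kept) + "."

chunks = [
    "Upgrades are driven by the cluster version operator. The web console shows progress.",
    "Back up etcd before an upgrade. Backups are stored off-cluster.",
]
query = "upgrade the cluster"

def tokens(text):
    # Crude whitespace token count; a real evaluation would use the model tokenizer.
    return len(text.split())

raw_context = "\n".join(chunks)
full_prompt = f"Context:\n{raw_context}\n\nQuestion: {query}"
summary_prompt = f"Context:\n{summarize(chunks, query)}\n\nQuestion: {query}"
```

Comparing `tokens(full_prompt)` with `tokens(summary_prompt)` quantifies the token savings the card asks to evaluate.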
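Labeled-topic metadata filtering restricts the candidate pool by topic label before similarity ranking. A sketch with invented chunks and labels; chromadb exposes the same idea through the `where` argument of `collection.query()`:

```python
chunks = [
    {"text": "Scale worker nodes with machine sets", "topic": "scaling"},
    {"text": "Upgrade the cluster from the web console", "topic": "upgrades"},
    {"text": "Pause machine health checks during an upgrade", "topic": "upgrades"},
]

def overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query, topic=None, k=2):
    # Pre-filter by the labeled topic metadata, then rank by similarity.
    pool = [c for c in chunks if topic is None or c["topic"] == topic]
    ranked = sorted(pool, key=lambda c: overlap(query, c["text"]), reverse=True)
    return [c["text"] for c in ranked[:k]]
```

The evaluation would compare retrieval quality with and without the topic filter on queries whose topic is known.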
From these evaluations, select the best combination of techniques to:
- Create an advanced RAG chain definition
- Example of an advanced RAG chain:
  - Rewrite the prompt 3x (q1, q2, q3) > retrieve top-3 per prompt > prioritize documents matched by multiple questions > ensemble > re-rank > summarize > augment prompt > send prompt to LLM
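The example chain above can be sketched end to end with stub components; every scorer, rewriter, and summarizer here is a toy stand-in for the corresponding model call, and the final send-to-LLM step is omitted:

```python
from collections import Counter

corpus = [
    "Upgrade the cluster from the web console",
    "Back up etcd before an upgrade",
    "Scale worker nodes with machine sets",
    "The cluster version operator drives upgrades",
]

def overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def rewrite(query):
    # Stand-in for LLM query rewriting: q1, q2, q3.
    return [query, f"steps to {query}", f"guide for {query}"]

def retrieve(q, k=3):
    # Retrieve top-3 per rewritten prompt.
    return sorted(corpus, key=lambda d: overlap(q, d), reverse=True)[:k]

def rerank(query, docs):
    # Stand-in cross-encoder: exact-phrase bonus on top of word overlap.
    return sorted(docs, key=lambda d: overlap(query, d) + (2 if query.lower() in d.lower() else 0), reverse=True)

def summarize(docs, query):
    # Stand-in summarizer: keep the top two re-ranked documents.
    return " ".join(docs[:2])

def advanced_rag(query):
    # Rewrite 3x > retrieve top-3 per prompt > ensemble (prioritize documents
    # matched by several questions) > re-rank > summarize > augment prompt.
    counts = Counter()
    for q in rewrite(query):
        counts.update(retrieve(q))
    ensemble = [doc for doc, _ in counts.most_common()]
    ranked = rerank(query, ensemble)
    summary = summarize(ranked, query)
    return f"Context: {summary}\n\nQuestion: {query}"

prompt = advanced_rag("upgrade the cluster")
```

The returned string is the augmented prompt that the final step would send to the LLM.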