OCPSTRAT-895: OpenShift Lightspeed GA

      Background
      A high-quality RAG process focuses on three areas of optimization:

      1. Contextualized splitter function
      2. Embedding techniques and rich metadata
      3. Retrieval techniques 

      This Feature card covers point 2: embedding techniques and rich metadata.
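
      The idea behind point 2 can be sketched as follows. This is a minimal, self-contained illustration, not the OLS implementation: the field names and the toy deterministic "embedding" are assumptions, and a real pipeline would call a text embedding model (such as a sentence-transformers model) instead of toy_embed.

```python
# Illustrative sketch of point 2 (embedding techniques and rich metadata):
# each document chunk is stored with its embedding vector plus metadata
# that the retriever can later filter or boost on.

def toy_embed(text, dims=4):
    # Stand-in for a real embedding model call: folds character codes
    # into a fixed-size vector so the example stays self-contained.
    vec = [0.0] * dims
    for i, ch in enumerate(text):
        vec[i % dims] += ord(ch)
    return vec

def embed_chunk(text, metadata):
    # Pair the vector with its source text and rich metadata.
    return {"vector": toy_embed(text), "text": text, "metadata": metadata}

record = embed_chunk(
    "Install the operator via OperatorHub.",
    {"doc_title": "Installing", "section": "Prerequisites"},
)
```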
      Deliverables:

      • Make text embedding model configurable
        • The intent is to have the ability to replace the BAAI/bge-* models with an Apache 2.0-licensed sentence-transformer model from Hugging Face:
          • Candidates: https://www.sbert.net/docs/pretrained_models.html
          • Preferred candidates:
            • sentence-transformers/all-mpnet-base-v2 (768 dimensions)
              • Evaluate this model as the preferred alternate default text embedding model to replace the BAAI/bge-* models
            • sentence-transformers/all-distilroberta-v1 (768 dimensions)
            • sentence-transformers/multi-qa-distilbert-cos-v1 (768 dimensions)
            • sentence-transformers/all-MiniLM-L12-v2 (384 dimensions)
          • A replacement default text embedding model should support the following scoring functions:
            • dot-product
            • cosine-similarity
            • Euclidean distance
      • Evaluate the quality of the retrieved documents and the retrieval scoring function across the "all-*" text embedding models.
      • Update the RAG embedding pipeline to use the new preferred text embedding model
        • Note: the final text embedding model must be approved by the legal team for redistribution as part of OLS
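
      The three scoring functions a replacement default model must support can be sketched as below. The toy vectors are illustrative assumptions; in the real pipeline they would come from the chosen model (e.g. sentence-transformers/all-mpnet-base-v2, 768 dimensions).

```python
import math

# Sketch of the three required scoring functions, computed over toy
# embedding vectors rather than real model output.

def dot_product(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Dot product of the vectors divided by the product of their norms.
    norm_a = math.sqrt(dot_product(a, a))
    norm_b = math.sqrt(dot_product(b, b))
    return dot_product(a, b) / (norm_a * norm_b)

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

query = [0.1, 0.3, 0.5]
doc = [0.2, 0.1, 0.4]

scores = {
    "dot-product": dot_product(query, doc),
    "cosine-similarity": cosine_similarity(query, doc),
    "euclidean-distance": euclidean_distance(query, doc),
}
```

      Note that dot-product and cosine-similarity produce the same ranking only when the model emits normalized embeddings, which is one reason to evaluate each candidate model against all three functions rather than assuming interchangeability.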

            gausingh@redhat.com Gaurav Singh
            wcabanba@redhat.com William Caban