• Type: Story
    • Resolution: Unresolved
    • Priority: Major
    • 1.10.0
    • None
    • AI, Lightspeed
    • None
    • RHDH AI Sprint 3288

      Story

As an RHDH user, I want to chat with the notebook session and have the AI refuse to answer when no documents are present, and answer strictly based on uploaded content when documents are present, so that I avoid hallucinations and ensure the advice is grounded solely in the provided context.

      Background

A key risk with LLMs is hallucination, or answering from general knowledge when the user intends to analyze specific data. To ensure the AI Notebook acts as a true document analyzer, we need strict logical guardrails that prevent the model from using outside knowledge or answering when no data exists. We should make use of the custom safety shield feature in the Lightspeed stack and Llama Stack.
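The guardrail described above can be sketched as follows. This is a minimal, illustrative sketch of the refusal/grounding logic only; the names (`guard_prompt`, `REFUSAL`) are hypothetical, and the real implementation would register this check as a custom safety shield via Llama Stack's Safety API rather than as a plain function.

```python
# Hypothetical sketch of the "no documents, no answer" guardrail.
# In the actual story this logic would live in a custom safety shield
# registered with Llama Stack / the Lightspeed stack; all names here
# are illustrative assumptions, not the real API.

REFUSAL = "I can only answer questions about documents uploaded to this notebook."


def guard_prompt(user_question: str, uploaded_docs: list[str]) -> str:
    """Return a refusal when no documents exist; otherwise build a
    prompt that restricts the model to the uploaded content."""
    if not uploaded_docs:
        # Guardrail 1: no data present -> refuse instead of answering
        # from the model's general knowledge.
        return REFUSAL

    # Guardrail 2: documents present -> ground the answer strictly
    # in the provided context.
    context = "\n\n".join(uploaded_docs)
    return (
        "Answer ONLY from the context below. If the answer is not in "
        "the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_question}"
    )
```

A shield built this way is evaluated before the model call, so the refusal path never reaches the LLM at all, which is what makes the guardrail strict rather than best-effort.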

      Dependencies and Blockers

      QE impacted work

      Documentation impacted work

      Acceptance Criteria

Upstream documentation updates (design docs, release notes, etc.)

      Technical enablement / Demo

              rh-ee-lyoon Lucas Yoon
              RHDH AI