Type: Story
Resolution: Unresolved
Priority: Major
Story
As an RHDH user, I want to talk to the notebook session and have the AI refuse to answer when no documents are present, and answer strictly from the uploaded content when documents are present, so that I avoid hallucinations and the advice stays grounded solely in the provided context.
Background
A key risk with LLMs is hallucination: answering from general knowledge when the user intends to analyze specific data. To ensure the AI Notebook acts as a true document analyzer, we need strict logical guardrails that prevent the model from using outside knowledge or from answering at all when no data exists. This should make use of the custom safety shield feature in Lightspeed Stack and Llama Stack.
Dependencies and Blockers
- QE impacted work
- Documentation impacted work

Acceptance Criteria
- Upstream documentation updates (design docs, release notes, etc.)
- Technical enablement / demo