Feature
Resolution: Unresolved
Feature Overview (mandatory - Complete while in New status)
An elevator pitch (value statement) that describes the Feature in a clear, concise way, i.e. an executive summary of the user goal or problem that is being solved, and why it matters to the user. The "What & Why".
Requirements:
- Know the default providers (and how to invoke them)
- Ability to invoke the InstructLab providers with simple commands, e.g. `data generate`, `train` (examples only), and trigger the respective workflows
- As a user, I will be able to pass my custom source documents and qna.yaml files
  - These will be mixed (on the server side) with precomputed datasets
- As a user, I want to be able to run InstructLab's SDG "generate" agentic pipeline workflow with the ilab SDG library as a provider through the LLS server implementation.
- As a user, I want to be able to serve and chat with supported models via a supported provider.
- As a user, I want to be able to trigger training/eval workflows on the server with the ilab training/eval libraries as providers.
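The command-to-provider mapping described above can be sketched as follows. This is a minimal, hypothetical illustration: the class, method, and provider names are assumptions for discussion, not the real llama-stack-client API.

```python
# Hypothetical sketch of how simple commands such as `data generate` and
# `train` might dispatch to pre-configured server-side providers.
# All names here are illustrative, not the actual llama-stack-client API.
import uuid


class MockWorkflowServer:
    """In-memory stand-in for an LLS server with pre-configured providers."""

    def __init__(self):
        # Default providers the user would discover rather than configure.
        self.providers = {
            "data generate": "instructlab-sdg",
            "train": "instructlab-training",
            "eval": "instructlab-eval",
        }
        self.jobs = {}

    def invoke(self, command: str, **params) -> str:
        """Dispatch a command to its provider and return a job UUID."""
        provider = self.providers[command]
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = {
            "provider": provider,
            "status": "running",
            "params": params,
        }
        return job_id


server = MockWorkflowServer()
job = server.invoke("data generate", documents=["doc1.pdf"], qna="qna.yaml")
print(server.jobs[job]["provider"])  # instructlab-sdg
```

The point of the sketch is the shape of the interaction: a short command resolves to a provider, and the server hands back a job UUID the user can track, rather than blocking on the workflow.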
Goals (mandatory - Complete while in New status)
Provide high-level goal statement, providing user context and expected user outcome(s) for this Feature
- Who benefits from this Feature, and how?
- What is the difference between today’s current state and a world with this Feature?
Proposed flow:
Step 1: Connect to the server with a URL and API key
- The user must know the RHEL AI port - this needs to be documented
Step 2: User can see the available providers (pre-configured for them)
Step 3: User uses the UI to get a docling schema of their input documents and qna.yamls, OR uses the pre-processing endpoint
Step 4: Use the SDG endpoint to submit the docling schema of the input docs/dataset and qna.yamls - a UUID is returned, with progress updates
Step 5: Use the SDG output to kick off training - a UUID is returned that can be used to check progress
Step 6: Evaluate the model(s)
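The six steps above can be simulated end to end. This is a sketch under stated assumptions: the client class, endpoint names, port, and job-status fields are hypothetical placeholders, not the actual RHEL AI or llama-stack API.

```python
# Illustrative simulation of the proposed flow: connect -> list providers ->
# preprocess -> SDG -> train -> eval. Names are assumptions for illustration.
import uuid


class MockLLSClient:
    def __init__(self, base_url: str, api_key: str):
        # Step 1: connect with the documented server URL and an API key.
        self.base_url = base_url
        self.api_key = api_key
        self._jobs = {}

    def list_providers(self):
        # Step 2: providers are pre-configured on the server side.
        return ["instructlab-sdg", "instructlab-training", "instructlab-eval"]

    def preprocess(self, documents, qna_files):
        # Step 3: produce a docling schema of the input docs and qna.yamls.
        return {"schema": "docling", "documents": documents, "qna": qna_files}

    def _submit(self, kind, payload):
        # Every async workflow returns a UUID for progress tracking.
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {"kind": kind, "status": "completed", "payload": payload}
        return job_id

    def generate(self, schema):
        # Step 4: submit the docling schema to the SDG endpoint.
        return self._submit("sdg", schema)

    def train(self, sdg_job_id):
        # Step 5: use the SDG output to kick off training.
        return self._submit("train", {"sdg_job": sdg_job_id})

    def evaluate(self, train_job_id):
        # Step 6: evaluate the trained model(s).
        return self._submit("eval", {"train_job": train_job_id})

    def status(self, job_id):
        return self._jobs[job_id]["status"]


client = MockLLSClient("http://rhel-ai.example:8321", api_key="...")
schema = client.preprocess(["guide.pdf"], ["qna.yaml"])
sdg_job = client.generate(schema)
train_job = client.train(sdg_job)
eval_job = client.evaluate(train_job)
print(client.status(eval_job))  # completed
```

Note the recurring pattern: each long-running step (SDG, training, eval) returns a UUID immediately, and the output of one step is the input handle for the next, which is what makes the ordering question in the open questions section important to document.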
Requirements (mandatory - Complete while in Refinement status):
A list of specific needs, capabilities, or objectives that a Feature must deliver to satisfy the Feature's goal. Some requirements will be flagged as MVP. If an MVP requirement shifts, the Feature shifts. If a non-MVP requirement slips, it does not shift the Feature.
| Requirement | Notes | isMVP? |
| --- | --- | --- |
Done - Acceptance Criteria (mandatory - Complete while in Refinement status):
Acceptance Criteria articulates and defines the value proposition - what is required to meet the goal and intent of this Feature. The Acceptance Criteria provides a detailed definition of scope and the expected outcomes, from a user's point of view.
…
<your text here>
Use Cases - i.e. User Experience & Workflow: (Initial completion while in Refinement status):
Include use case diagrams, main success scenarios, alternative flow scenarios.
<your text here>
Out of Scope (Initial completion while in Refinement status):
High-level list of items or personas that are out of scope.
<your text here>
Documentation Considerations (Initial completion while in Refinement status):
Provide information that needs to be considered and planned so that documentation will meet customer needs. If the feature extends existing functionality, provide a link to its current documentation.
<your text here>
Questions to Answer (Initial completion while in Refinement status):
Include a list of refinement / architectural questions that may need to be answered before coding can begin.
- How do users know the ‘order’ (i.e. UI/pre-processing -> SDG -> train etc) in which to call these endpoints?
- How do users know which models to use as student/teacher?
- How do users know API inputs and outputs?
- What do users get as the output of SDG? Do they see the knowledge and skills training JSONLs and the eval JSONLs?
- What is returned to the user at the end of training?
- Can this model be automatically pushed to S3/OCI/HF - with user input and auth?
- Where are data and model artifacts stored? - on the client side and/or server side?
- Other Client CLI 'experience-related' work - to be discovered
Background and Strategic Fit (Initial completion while in Refinement status):
Provide any additional context that is needed to frame the feature.
<your text here>
Customer Considerations (Initial completion while in Refinement status):
Provide any additional customer-specific considerations that must be made when designing and delivering the Feature.
<your text here>
Team Sign Off (Completion while in Planning status)
- All required Epics (known at the time) are linked to this Feature
- All required Stories, Tasks (known at the time) for the most immediate Epics have been created and estimated
- Add - Reviewers name, Team Name
- Acceptance == Feature is "Ready": well understood and scope is clear - the Acceptance Criteria (scope) is elaborated, well defined, and understood
- Note: Only set FixVersion/s: on a Feature if the delivery team agrees they have the capacity and have committed that capability for that milestone
| Reviewed By | Team Name | Accepted | Notes |
| --- | --- | --- | --- |
| … | | | |
Relates to:
- RHELAI-3632 'llama-stack-client configure' doesn't return an error if user specifies wrong port (Closed)
- RHELAI-3631 e2e workflow - discovery work with providers (New)