Type: Feature
Resolution: Won't Do
Priority: Critical
Fix Version: rhelai-1.5
Feature Overview:
This feature card is part of validating third-party student models with the InstructLab component for RHEL AI 1.5. As a currently supported teacher model, Mixtral 8x7B needs to go through the same evaluation process as the new models to ensure consistent benchmarking.
Goals:
- Run Mixtral 8x7B (mixtral-8x7b-instruct-v0-1) successfully as a teacher model in the InstructLab tuning flow, tuning the current default student model, Granite 3.1 8B
- Produce a fine-tuned Granite 3.1 8B student model
Out of Scope [To be updated post-refinement]:
Requirements:
- Create a fine-tuned Granite 3.1 8B student model
Done - Acceptance Criteria:
- Model quantization level confirmed for the [Granite 3.1 8B Base Starter|https://catalog.redhat.com/software/containers/rhelai1/modelcar-granite-3-1-8b-starter-v1/6798abc65115f7fe1314d7c7] baseline
- Base and fine-tuned model handover to PSAP
- Student model performance before and after tuning on OpenLLM Leaderboard v1/v2 is comparable, with no significant accuracy degradation (within ±5 points) - TBD
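The ±5-point tolerance above can be expressed as a simple check. The sketch below is illustrative only: the helper name, benchmark names, and score values are placeholders, not real evaluation results from this card.

```python
# Hypothetical helper illustrating the +/-5-point acceptance check described
# in the criteria above; all scores here are made-up placeholder numbers.

def within_tolerance(before: float, after: float, max_drop: float = 5.0) -> bool:
    """Return True when the tuned score has not dropped by more than max_drop points."""
    return (before - after) <= max_drop

# Per-benchmark scores before and after tuning (illustrative numbers only).
baseline = {"arc_challenge": 64.2, "hellaswag": 83.0, "mmlu": 62.5}
tuned = {"arc_challenge": 63.1, "hellaswag": 84.4, "mmlu": 56.9}

# Collect any benchmark whose degradation exceeds the tolerance.
regressions = {
    name: (baseline[name], tuned[name])
    for name in baseline
    if not within_tolerance(baseline[name], tuned[name])
}
print(regressions)  # mmlu drops 5.6 points, exceeding the 5-point tolerance
```

Improvements always pass the check; only drops larger than the threshold are flagged as regressions.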
Use Cases - i.e. User Experience & Workflow:
N/A - no changes
Documentation Considerations:
N/A - no changes
Questions to answer:
- See the open questions in https://issues.redhat.com/browse/RHELAI-3559
Background & Strategic Fit:
Customers have been asking to leverage the latest third-party models from Meta, Mistral, Microsoft, Qwen, etc. within Red Hat AI products. As customers continue to adopt and deploy open-source models, the third-party model validation pipeline provides inference performance benchmarking and accuracy evaluations for third-party models, giving customers confidence and predictability when bringing third-party models to InstructLab and vLLM within RHEL AI and RHOAI.
See Red Hat AI Model Validation Strategy Doc
See Red Hat Q1 2025 Third Party Model Validation Presentation
Clones:
- RHELAI-3597 [ilab] Running Granite 3.1-8B-Base Starter as a student model (and inference testing) in ilab tuning flow
Status: Refinement