  Red Hat Enterprise Linux AI / RHELAI-3597

[ilab] Running Granite 3.1-8B-Base Starter as a student model (and inference testing) in ilab tuning flow

    • RHELAI-3557: RHEL AI Third-Party Model Validation Deliverables for Summit '25

      Feature Overview:
      This feature card is part of validating third-party student models with the InstructLab component for RHEL AI 1.5. As a currently supported student model, Granite 3.1 8B Starter needs to go through the same evaluation process as the new models to ensure consistent benchmarking. https://catalog.redhat.com/software/containers/rhelai1/modelcar-granite-3-1-8b-starter-v1/6798abc65115f7fe1314d7c7

      Goals:

      • Run Granite 3.1 8B successfully as a student model in the InstructLab tuning flow, tuned by the current teacher model, Mixtral 8x7B Instruct (mixtral-8x7b-instruct-v0-1)
      • Create a fine-tuned Granite 3.1 8B student model
      • Hand off the model to the PSAP team for model validation (email/Slack rh-ee-rogreenb when completed) so they can run OpenLLM Leaderboard v1/v2 evals comparing the base model and the fine-tuned model
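The tuning flow in the goals above can be sketched as an `ilab` command sequence. The subcommand names (`ilab model download`, `ilab data generate`, `ilab model train`, `ilab model chat`) exist in the InstructLab CLI, but the exact flags, defaults, and model paths vary across InstructLab and RHEL AI releases, so treat the details below as illustrative assumptions rather than a verified recipe:

```shell
# Illustrative only: flag values and registry paths are assumptions and
# differ between InstructLab / RHEL AI releases.

# 1. Pull the Granite 3.1 8B Starter student model (modelcar image path is an example).
ilab model download --repository docker://registry.redhat.io/rhelai1/modelcar-granite-3-1-8b-starter-v1

# 2. Generate synthetic training data with the configured Mixtral 8x7B Instruct teacher.
ilab data generate

# 3. Fine-tune the Granite 3.1 8B student on the generated data.
ilab model train

# 4. Sanity-check the tuned checkpoint with a quick interactive inference test.
ilab model chat
```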

       Out of Scope [To be updated post-refinement]:

      •  

      Requirements:

      • Accuracy evaluation requirements:
        • Hand off the base and fine-tuned models to the PSAP team (email/Slack rh-ee-rogreenb when completed) to perform OpenLLM Leaderboard v1/v2 evaluation

      Done - Acceptance Criteria:

      • Base and fine-tuned models handed over to PSAP
      • Student model performance on OpenLLM Leaderboard v1/v2 before and after tuning is comparable, with no significant accuracy degradation (within ±5 points; exact threshold TBD)
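The ±5-point acceptance gate above can be expressed as a small check. The benchmark names and scores below are hypothetical placeholders (not real evaluation results), and the 5-point threshold is the still-TBD value from the criteria:

```python
# Hypothetical sketch of the acceptance gate: compare base vs. fine-tuned
# OpenLLM Leaderboard scores and flag any benchmark that regresses by more
# than the (TBD) 5-point threshold.

THRESHOLD = 5.0  # max allowed per-benchmark drop, in leaderboard points


def regressions(base: dict, tuned: dict, threshold: float = THRESHOLD) -> dict:
    """Return benchmarks where the tuned model dropped by more than `threshold`."""
    return {
        name: base[name] - tuned[name]
        for name in base
        if base[name] - tuned[name] > threshold
    }


# Placeholder scores, not real evaluation results.
base_scores = {"arc_challenge": 64.2, "hellaswag": 83.1, "mmlu": 62.5}
tuned_scores = {"arc_challenge": 63.0, "hellaswag": 84.0, "mmlu": 55.9}

bad = regressions(base_scores, tuned_scores)
print(bad)  # only the benchmark that dropped more than 5 points is flagged
```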

      Use Cases - i.e. User Experience & Workflow:

      N/A; no changes.

      Documentation Considerations:

      N/A; no changes.

      Questions to answer: 

      Background & Strategic Fit:

      Customers have been asking to leverage the latest and greatest third-party models from Meta, Mistral, Microsoft, Qwen, etc. within Red Hat AI products. As they continue to adopt and deploy open source models, the third-party model validation pipeline provides inference performance benchmarking and accuracy evaluations for third-party models, giving customers confidence and predictability when bringing third-party models to InstructLab and vLLM within RHEL AI and RHOAI.

      See Red Hat AI Model Validation Strategy Doc

      See Red Hat Q1 2025 Third Party Model Validation Presentation

              rh-ee-rogreenb Rob Greenberg
              Jenny Yi
