AI Platform Core Components / AIPCC-3165

Validate RHAIIS Releases via JBenchmark

    • Type: Initiative
    • Resolution: Duplicate
    • Priority: Undefined
    • Component: Model Validation

      Add support to the JBenchmark system for validating official releases of Red Hat AI Inference Server (RHAIIS).

       

      The goal is to ensure that each RHAIIS version is benchmarked systematically before or immediately after GA, covering key performance metrics across representative workloads.

       

      This includes:

      • Automatically triggering benchmark runs upon release of new RHAIIS versions (or via manual input)
      • Running benchmarks on stable, production-grade configurations using Red Hat-validated models
      • Comparing results against prior releases to track performance regressions/improvements
      • Tagging results clearly with the RHAIIS version for reporting and filtering
      • Generating structured output for downstream use in AI Hub
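The version-tagging and structured-output points above can be sketched as follows. This is a minimal illustration only: the field names, the `BenchmarkResult` shape, and the model name are hypothetical, not the actual JBenchmark schema.

```python
# Hypothetical sketch: tag a benchmark run with the RHAIIS version so
# results can be filtered by version and consumed downstream (e.g. AI Hub).
# All field and model names here are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class BenchmarkResult:
    rhaiis_version: str             # version under test, from benchmark config
    model: str                      # Red Hat-validated model used for the run
    ttft_ms: float                  # time to first token, milliseconds
    throughput_tokens_per_s: float  # sustained generation throughput

def to_report_json(result: BenchmarkResult) -> str:
    """Serialize a run to structured JSON keyed by RHAIIS version."""
    return json.dumps(
        {"rhaiis_version": result.rhaiis_version, "metrics": asdict(result)},
        indent=2,
    )

result = BenchmarkResult("3.2.1", "example-validated-model", 118.4, 960.0)
print(to_report_json(result))
```

Keying every record by `rhaiis_version` is what makes the later per-version filtering and delta reporting straightforward.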

       

       

      Acceptance Criteria:

       

       

      • Ability to specify RHAIIS version as a parameter in benchmark config
      • Benchmarks run and stored with version metadata
      • Reports clearly show performance delta between RHAIIS versions
      • Validation logic flags regressions (e.g., TTFT, throughput drops)
      • Results pushed to shared storage and integrated into customer-facing dashboards
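The regression-flagging criterion above could look like the sketch below. The metric names, 5% threshold, and result layout are assumptions for illustration; the real validation logic and thresholds would live in JBenchmark.

```python
# Hypothetical regression check between two RHAIIS benchmark runs.
# Metric names, threshold, and result shape are illustrative only.

REGRESSION_THRESHOLD = 0.05  # flag relative degradations larger than 5%

# Metrics where a *higher* value is better (e.g. throughput); for
# latency-style metrics such as TTFT, lower is better.
HIGHER_IS_BETTER = {"throughput_tokens_per_s"}

def flag_regressions(baseline: dict, candidate: dict,
                     threshold: float = REGRESSION_THRESHOLD) -> list:
    """Return names of metrics that regressed versus the baseline run."""
    regressions = []
    for metric, base_value in baseline.items():
        new_value = candidate.get(metric)
        if new_value is None or base_value == 0:
            continue  # metric missing in candidate, or no usable baseline
        if metric in HIGHER_IS_BETTER:
            delta = (base_value - new_value) / base_value  # a drop is bad
        else:
            delta = (new_value - base_value) / base_value  # a rise is bad
        if delta > threshold:
            regressions.append(metric)
    return regressions

# Example: TTFT 20% slower, throughput essentially unchanged -> TTFT flagged.
baseline = {"ttft_ms": 120.0, "throughput_tokens_per_s": 950.0}
candidate = {"ttft_ms": 144.0, "throughput_tokens_per_s": 952.0}
print(flag_regressions(baseline, candidate))  # ['ttft_ms']
```

Treating throughput and latency metrics with opposite polarity keeps a single threshold meaningful for both TTFT and throughput drops.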

       

              rh-ee-abadli Aviran Badli (Inactive)
              Votes: 0
              Watchers: 2
