SRVKP-8577: Pipelines-as-Code: AI/LLM Integration on Pull Request


      Feature Goal

      • Deliver an AI/LLM-powered analysis framework within Pipelines as Code (PaC) that enhances CI/CD diagnostics by automatically summarizing failures, identifying probable root causes, and suggesting corrective actions.
      • Establish configurable AI-driven analysis scenarios (e.g., general failure, security checks, test flakiness, pipeline summaries) that serve different developer needs through a single unified interface (see the interface sketch after this list).
      • Provide an extensible foundation for future integration with OpenShift LightSpeed and compatible external AI providers, enabling intelligent DevOps automation across Tekton and OpenShift.
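
      The concrete API has not been designed yet; as a rough sketch of the "single unified interface" idea, a pluggable Go provider abstraction along the following lines (every name below is hypothetical) would let analysis roles and LLM backends vary independently of the calling code:

        package analysis

        import "context"

        // Role identifies a configurable analysis scenario; the set is expected
        // to grow (general failure, security, test flakiness, summaries).
        type Role string

        const (
            RoleGeneralFailure Role = "general-failure"
            RoleSecurity       Role = "security"
            RoleTestFlakiness  Role = "test-flakiness"
            RoleSummary        Role = "summary"
        )

        // Request carries the pipeline context handed to the LLM backend.
        type Request struct {
            Role     Role
            RepoName string
            Logs     string // trimmed pipeline logs and error traces
        }

        // Result is the machine-generated analysis posted back to the pull request.
        type Result struct {
            Summary     string
            RootCause   string
            Suggestions []string
        }

        // Provider abstracts an LLM backend (OpenAI, Gemini, a future LightSpeed
        // delegate, or an internal equivalent) behind one unified interface.
        type Provider interface {
            Name() string
            Analyze(ctx context.Context, req Request) (*Result, error)
        }

      Keeping every backend behind one small interface is what would let roles, providers, and the later LightSpeed delegation change without touching callers.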

      Why is this important?

      • Developers currently spend excessive time investigating failed pipeline runs, manually parsing logs and error traces.
      • This feature improves productivity and developer experience by embedding contextual, AI-generated insights directly into pull requests or pipeline outputs.
      • It introduces a scalable AI integration point that can evolve toward more advanced use cases such as performance summaries, dependency impact analysis, or automated remediation.
      • Future LightSpeed integration will align PaC with OpenShift’s broader AI-driven user assistance strategy and its centralised prompt, credential, and policy controls.

      Scenarios

      • Developer: Receive concise AI-generated explanations when a pipeline fails, highlighting root causes and recommended fixes.
      • Repository Maintainer: Configure which AI roles (e.g., security, test reliability, dependency updates) apply to pipelines without modifying code (see the configuration sketch after this list).
      • Security Engineer: Enable targeted AI analysis that detects vulnerabilities or dependency risks based on build logs and scanner outputs.
      • CI User: View pipeline summaries for both successful and failed runs, offering consistent visibility into build health.
      • Platform Team: Integrate organization-wide AI settings, control API keys, and manage providers centrally while allowing teams to extend with custom prompts.
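
      As one possible shape for that repository-level configuration (every field and key below is illustrative, not a committed CRD design), the settings could deserialize into a small struct:

        package analysis

        // Settings sketches the repository-level configuration surface. A
        // matching, equally hypothetical, Repository CR snippet might read:
        //
        //     aiAnalysis:
        //       enabled: true
        //       provider: openai
        //       secretRef: ai-provider-token
        //       roles: [general-failure, security]
        //       onEvents: [pull_request]
        type Settings struct {
            Enabled   bool              `yaml:"enabled"`
            Provider  string            `yaml:"provider"`  // "openai", "gemini", later "lightspeed"
            SecretRef string            `yaml:"secretRef"` // Kubernetes secret holding the API key
            Roles     []string          `yaml:"roles"`     // which analysis roles to run
            OnEvents  []string          `yaml:"onEvents"`  // triggers, e.g. pull_request
            Prompts   map[string]string `yaml:"prompts,omitempty"` // optional per-role custom prompts
        }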

      Future Integration

      Delegate inference to OpenShift LightSpeed when available, using unified credentials, centralised prompts, and policy controls.
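
      Reusing the hypothetical Provider interface from the Feature Goal sketch, delegation could be a thin wrapper that prefers LightSpeed when reachable and falls back to the directly configured backend. LightSpeed exposes no such inference API today, so this is illustrative only:

        package analysis

        import (
            "context"
            "fmt"
        )

        // delegatingProvider prefers OpenShift LightSpeed when configured and
        // reachable, otherwise falls back to the directly configured backend.
        type delegatingProvider struct {
            lightspeed Provider // nil until LightSpeed delegation is supported
            fallback   Provider
        }

        func (d *delegatingProvider) Name() string { return "delegating" }

        func (d *delegatingProvider) Analyze(ctx context.Context, req Request) (*Result, error) {
            if d.lightspeed != nil {
                if res, err := d.lightspeed.Analyze(ctx, req); err == nil {
                    return res, nil
                }
                // Fall through: LightSpeed unavailable or failed; try the fallback.
            }
            if d.fallback == nil {
                return nil, fmt.Errorf("no AI provider available for role %q", req.Role)
            }
            return d.fallback.Analyze(ctx, req)
        }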

      Acceptance Criteria (Mandatory)

      • CI - MUST be running successfully with automated test coverage for the AI/LLM analysis feature.
      • LLM-based analysis MUST provide summaries or recommendations for failed pipeline runs when enabled.
      • System MUST gracefully handle unavailable or failed AI provider requests with clear user messaging (see the fallback sketch after this list).
      • Repository-level configuration MUST allow flexible provider, role, and trigger setup without code changes.
      • All AI responses MUST indicate that the content is machine-generated and best-effort.
      • Release Technical Enablement - Provide documentation, configuration samples, and upgrade notes.
      • Architecture MUST allow future integration with OpenShift LightSpeed without redesign.
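
      To make the disclaimer and graceful-degradation criteria concrete (again reusing the hypothetical types above; the comment wording is invented, not a committed format), the rendering logic might degrade like this:

        package analysis

        import (
            "context"
            "fmt"
        )

        const disclaimer = "Note: this analysis is machine-generated and best-effort; verify before acting on it."

        // RenderComment turns an analysis into pull-request comment text, always
        // prepending the machine-generated disclaimer. On provider failure it
        // degrades to a clear notice instead of failing the PipelineRun.
        func RenderComment(ctx context.Context, p Provider, req Request) string {
            res, err := p.Analyze(ctx, req)
            if err != nil {
                return fmt.Sprintf("%s\n\nAI analysis is currently unavailable (%v); the pipeline result above is unaffected.",
                    disclaimer, err)
            }
            out := fmt.Sprintf("%s\n\nSummary: %s\nProbable root cause: %s\n", disclaimer, res.Summary, res.RootCause)
            for _, s := range res.Suggestions {
                out += fmt.Sprintf("- %s\n", s)
            }
            return out
        }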

      Dependencies (internal and external)

      • Secure API key management via Kubernetes secrets (a minimal retrieval sketch follows this list).
      • Existing Tekton Pipelines and PaC log retrieval mechanisms.
      • Governance for provider usage (OpenAI, Gemini, or internal equivalents).
      • OpenShift LightSpeed — future dependency once supported for inference delegation.
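
      For the secrets dependency, retrieval would presumably go through client-go against the secret named in the repository settings; the secret name and "token" key below are placeholders:

        package analysis

        import (
            "context"
            "fmt"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
        )

        // apiKeyFromSecret reads the provider token from a Kubernetes secret in
        // the repository's namespace, so keys never appear in the Repository CR.
        func apiKeyFromSecret(ctx context.Context, kc kubernetes.Interface, ns, name string) (string, error) {
            sec, err := kc.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return "", fmt.Errorf("reading AI provider secret %s/%s: %w", ns, name, err)
            }
            token, ok := sec.Data["token"]
            if !ok {
                return "", fmt.Errorf("secret %s/%s has no %q key", ns, name, "token")
            }
            return string(token), nil
        }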

      Previous POC
