Type: Story
Resolution: Unresolved
Story (Required)
As a developer trying to integrate LLM analysis into my PR workflow, I want LLM analysis results posted as GitHub check runs, so that I can see AI insights alongside other CI checks and use them as required status checks.
This feature enables LLM analysis results to be posted as native GitHub check runs, similar to how test results and linters appear in the PR checks section. This improves visibility by placing AI insights where developers naturally look for CI feedback, and enables teams to require LLM approval before merging.
Background (Required)
Currently, LLM analysis only supports pr-comment as an output destination. While comments are useful, they:
- Get buried in long PR discussion threads
- Cannot be used as required status checks for branch protection
- Don't provide a clear pass/fail signal in the PR checks UI
- Lack the structured summary/detail separation that check runs provide
GitHub check runs are first-class CI status objects that appear prominently in the PR UI, can be marked as required, and support rich markdown output.
Related: current LLM analysis implementation, documented at docs/content/docs/guide/llm-analysis.md
Out of scope
- Support for other Git providers (GitLab MR statuses, Bitbucket build statuses) - covered in separate stories
- Custom check run styling or branding beyond GitHub's standard check run API
- Updating check runs dynamically as new analysis is performed (check runs are immutable after creation in this story)
- Integration with GitHub Actions or other third-party check systems
Approach (Required)
High-level technical approach:
- Add check-run as a new valid value for the output field in AnalysisRole configuration
- Create a GitHub check run when LLM analysis completes for a role configured with this output
- Map LLM analysis results to a check run conclusion:
  * Successful analysis → conclusion of success, neutral, or failure, based on content/severity
  * Failed analysis → conclusion of failure, with error details
- Include the full LLM response in the check run's detailed output (supports markdown)
- Support multiple outputs simultaneously (e.g., output: "pr-comment,check-run")
- Ensure check runs are created with appropriate metadata (name derived from the role name, timestamps, etc.)
The feature should work alongside existing output destinations without breaking backward compatibility.
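A minimal sketch of what such a role configuration might look like on the Repository CR. The `settings.analysisRoles` field name and overall shape here are illustrative assumptions for this story, not the final schema:

```yaml
# Hypothetical Repository CR — the settings/analysisRoles shape is an
# assumption for illustration, not the implemented schema.
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-repo
spec:
  url: https://github.com/org/my-repo
  settings:
    analysisRoles:
      - name: security-review
        # New check-run value alongside the existing pr-comment destination;
        # multiple outputs are comma-separated.
        output: "pr-comment,check-run"
```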
Dependencies
- Existing LLM analysis infrastructure and pr-comment output implementation
- GitHub provider implementation with check runs API access
- GitHub App or token with checks:write permission
- Repository CRD must support check-run as a valid output option
Acceptance Criteria (Mandatory)
Given a Repository with an LLM role configured with output: "check-run", When LLM analysis completes, Then a GitHub check run is created on the PR with the analysis results
Given a successful LLM analysis, When the check run is created, Then the check run shows a success/neutral conclusion and includes the full LLM response in the detailed output
Given a failed LLM analysis, When the check run is created, Then the check run shows a failure conclusion with error details explaining what went wrong
Given a role with multiple outputs configured (e.g., output: "pr-comment,check-run"), When analysis completes, Then both a PR comment and check run are created
Given a check run is created, When a developer views the PR, Then the check run appears in the PR's checks section with a clear name derived from the role name
Given multiple LLM roles with check-run output, When all analyses complete, Then each role creates a separate, identifiable check run
Given insufficient GitHub permissions, When attempting to create a check run, Then the system logs an error and falls back gracefully without blocking the pipeline
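The result-to-conclusion mapping and check run payload described in the criteria above could be sketched roughly as follows. All names here (`AnalysisResult`, `build_check_run_payload`, the severity scale) are hypothetical, not the project's actual API; only the request-body fields (`name`, `head_sha`, `status`, `conclusion`, `output`) follow GitHub's "create a check run" endpoint:

```python
# Sketch: map an LLM analysis result to a GitHub check run payload.
# AnalysisResult and the severity scale are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    role_name: str
    succeeded: bool   # did the analysis itself run to completion?
    severity: str     # "none", "warning", or "blocking" (assumed scale)
    body: str         # full markdown response from the LLM

def conclusion_for(result: AnalysisResult) -> str:
    """Map an analysis outcome to a check run conclusion."""
    if not result.succeeded:
        return "failure"  # analysis error -> failure with details
    return {
        "none": "success",
        "warning": "neutral",
        "blocking": "failure",
    }.get(result.severity, "neutral")

def build_check_run_payload(result: AnalysisResult, head_sha: str) -> dict:
    """Body for POST /repos/{owner}/{repo}/check-runs."""
    return {
        "name": f"llm-analysis / {result.role_name}",  # name derived from role
        "head_sha": head_sha,
        "status": "completed",
        "conclusion": conclusion_for(result),
        "output": {
            "title": f"LLM analysis: {result.role_name}",
            "summary": "Analysis completed." if result.succeeded
                       else "Analysis failed; see details.",
            "text": result.body,  # full markdown response
        },
    }
```

Deriving the check run name from the role name also gives each role its own separate, identifiable check run, as the multi-role criterion requires.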
Edge cases to consider:
- Handling very long LLM responses that exceed GitHub check run size limits
- Check run naming conflicts when multiple roles have similar names
- GitHub API rate limits when creating many check runs
- Retrying check run creation on transient GitHub API failures
- Repositories using GitHub tokens vs GitHub Apps (different permission models)
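Two of these edge cases can be sketched concretely: truncating oversized responses (GitHub caps a check run's output text at 65,535 characters) and retrying transient API failures with backoff. The helper names and retry policy below are assumptions, not the implementation:

```python
# Sketch: handle oversized LLM output and transient GitHub API failures.
# Names and the retry policy are illustrative assumptions.
import time

# GitHub caps a check run's output.text at 65535 characters.
GITHUB_TEXT_LIMIT = 65535
TRUNCATION_NOTICE = "\n\n…output truncated to fit GitHub's check run size limit."

class TransientAPIError(Exception):
    """Stand-in for a retryable GitHub API failure (5xx, rate limiting)."""

def truncate_output(text: str, limit: int = GITHUB_TEXT_LIMIT) -> str:
    """Trim the LLM response so the check run payload stays within limits."""
    if len(text) <= limit:
        return text
    return text[: limit - len(TRUNCATION_NOTICE)] + TRUNCATION_NOTICE

def create_with_retry(create, attempts: int = 3, base_delay: float = 1.0):
    """Call a check-run-creating function, retrying transient failures with
    exponential backoff; after the last attempt, give up gracefully
    (return None for the caller to log) instead of blocking the pipeline."""
    for attempt in range(attempts):
        try:
            return create()
        except TransientAPIError:
            if attempt == attempts - 1:
                return None
            time.sleep(base_delay * 2 ** attempt)
```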