Goal Summary
This feature will integrate AI-driven risk scoring into RHACS (Red Hat Advanced Cluster Security). The goal is to provide more accurate, actionable, and transparent risk scores at both the individual CVE level and the deployment level, empowering users to prioritize and remediate the most critical risks more effectively.
Goals and Expected User Outcomes
The primary user, a Security Analyst or DevOps Engineer, will gain a clearer, more proactive understanding of their security posture. The AI-driven system will provide a new level of risk intelligence, moving beyond basic CVE severity to a context-aware risk score.
- CVE Prioritization: Users will see new CVE scores enhanced with actual runtime deployment risk data, so scores reflect how a vulnerability is exposed in their environment.
- Refined Overall Risk View: The main risk dashboard will display an enhanced, AI-driven overall risk score that reflects a more nuanced understanding of the environment's security posture.
- Transparent Risk Scoring: The UI will provide explainability for the risk scores, allowing users to understand the factors that influenced the score.
- Investigation Log: A full, structured log of the risk analysis, so users can audit how each score was derived.
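As a concrete illustration of what "structured format" could mean for the investigation log, the sketch below shows one possible JSON entry. All field names (`subject`, `steps`, `model_version`) are hypothetical placeholders, not a settled schema.

```python
import json

# Hypothetical structured investigation-log entry; every field name here is
# illustrative and would need to be defined during design.
log_entry = {
    "timestamp": "2025-01-01T12:00:00Z",
    "subject": {"type": "cve", "id": "CVE-2024-0001", "deployment": "payments-api"},
    "steps": [
        {"step": "gather_context", "detail": "Collected deployment exposure and runtime signals"},
        {"step": "score", "detail": "Model returned risk score 8.7"},
        {"step": "explain", "detail": "Top contributing factor: internet exposure"},
    ],
    "model_version": "frontier-v1",  # placeholder version identifier
}

# A structured entry can be serialized for storage or export as-is.
print(json.dumps(log_entry, indent=2))
```

A machine-readable shape like this would let users both review the analysis in the UI and export it for audit tooling.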
Acceptance Criteria
- A risk score for individual CVEs must be generated by the AI model and integrated into the Vulnerabilities dashboard.
- The overall risk dashboard must display the new, AI-driven risk score.
- The UI must include explainability features that provide clear, factor-based explanations for risk scores.
- The system must be designed to allow the user to host the frontier model on their own infrastructure or via a cloud provider.
- The upstream project must be updated with the model's core code.
- The solution must be performant and not introduce significant latency to the dashboards.
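To make the explainability criterion more concrete, the sketch below shows one possible factor-based response shape for a scored CVE. The factor names, weights, and payload layout are assumptions for illustration only, not the model's actual output contract.

```python
import json

# Hypothetical explainable risk-score payload. Factor names and weights are
# illustrative assumptions; the real factor set comes from the AI model.
score_explanation = {
    "cve": "CVE-2024-0001",
    "base_severity": "Important",
    "ai_risk_score": 8.7,
    "factors": [
        {"name": "internet_exposed", "weight": 0.40,
         "detail": "Deployment exposed via LoadBalancer service"},
        {"name": "runtime_process_activity", "weight": 0.35,
         "detail": "Vulnerable binary observed executing"},
        {"name": "privileged_container", "weight": 0.25,
         "detail": "Container runs with a privileged securityContext"},
    ],
}

# Weights are normalized so the UI can render each factor's contribution.
total_weight = sum(f["weight"] for f in score_explanation["factors"])
print(json.dumps(score_explanation, indent=2))
```

Keeping per-factor weights and human-readable details in the payload is what lets the UI render "why this score" without a second round trip, which also helps the latency criterion above.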
Links
Questions/Unknowns:
- How will we package the AI service developed by the Research team?
- The overall architecture still needs vetting, especially around how the frontend will send requests to this AI service.
- DevPreview/Demo phase definition of done TBD.
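One candidate answer to the open architecture question is sketched below: the frontend never calls the AI service directly, and the RHACS backend proxies the request with a configurable endpoint (covering both self-hosted and cloud-provider model hosting). Every name here (`AIServiceConfig`, `build_scoring_request`, the payload fields) is a hypothetical placeholder, not a proposed design decision.

```python
from dataclasses import dataclass

@dataclass
class AIServiceConfig:
    """Hypothetical config for reaching the AI scoring service.

    base_url may point at a self-hosted deployment or a cloud provider,
    matching the acceptance criterion that users choose where to host.
    """
    base_url: str
    timeout_seconds: float = 2.0  # bound the latency added to dashboards

def build_scoring_request(cve_id: str, deployment: str) -> dict:
    """Assemble the payload the backend might forward to the AI service."""
    return {
        "cve": cve_id,
        "deployment": deployment,
        # Illustrative signal set the backend could enrich the request with.
        "context": ["exposure", "runtime_activity"],
    }

cfg = AIServiceConfig(base_url="https://ai-scoring.example.internal")
req = build_scoring_request("CVE-2024-0001", "payments-api")
print(req["cve"], cfg.timeout_seconds)
```

Routing through the backend keeps model credentials and endpoint configuration out of the browser and gives one place to enforce the latency budget; whether this is the right split is exactly what the architecture vetting needs to decide.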