Resolution: Unresolved
CUSTOMER PROBLEM
As enterprises operationalize AI workloads—including models, agentic frameworks, and MCP Servers—security and platform teams lack a consistent way to detect, verify, and continuously monitor these components across build, deploy, and runtime.
AI assets often originate from third-party sources such as model registries, open model hubs, or internal pipelines, and are introduced into environments without visibility or integrity validation. This exposes organizations to:
- Model tampering or substitution, where compromised models replace trusted ones.
- AI BOM blind spots, where model or tool provenance is missing or unverifiable.
- Runtime uncertainty, where AI agents or MCP Servers execute unapproved or unsafe actions.
- No integrated developer feedback loop, leaving DevOps and AI platform teams unaware of security posture during model deployment or execution.
RHACS will address these gaps by extending its runtime and supply chain security capabilities to:
- Detect AI workloads and their toolchains (models, MCP Servers, frameworks) across clusters, VMs, and registries.
- Ingest and generate AI BOMs that provide detailed provenance metadata.
- Verify signatures and integrity using Sigstore.
- Continuously monitor runtime behavior, tool usage, and drift.
- Integrate RHACS APIs with MCP and OpenShift Lightspeed to make AI security insights available directly within the developer and AI operator workflows.
This enables a unified, AI-native runtime security fabric that protects the full lifecycle—from model ingestion to execution.
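To make the AI BOM idea above concrete, here is a minimal sketch of the kind of provenance record such a BOM could carry, in the CycloneDX style (CycloneDX 1.5 defines a machine-learning-model component type; the exact field layout below is illustrative, not a finalized RHACS schema):

```python
import hashlib
import json

def build_ai_bom(model_name: str, model_version: str, model_bytes: bytes) -> dict:
    """Build a minimal CycloneDX-style AI BOM entry for a model artifact.

    Illustrative sketch only: a real implementation would follow the
    CycloneDX ML-BOM schema and include richer provenance metadata
    (training data lineage, source registry, signing identity).
    """
    digest = hashlib.sha256(model_bytes).hexdigest()
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {
                "type": "machine-learning-model",
                "name": model_name,
                "version": model_version,
                "hashes": [{"alg": "SHA-256", "content": digest}],
            }
        ],
    }

bom = build_ai_bom("example-classifier", "1.0.0", b"fake model weights")
print(json.dumps(bom, indent=2))
```

Recording the artifact digest at ingestion time is what later makes tampering or substitution detectable at deploy and runtime.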
USERS
- Platform Security Engineers: Extend RHACS runtime policies to cover AI workloads and enforce integrity verification.
- Cluster Admins / DevSecOps Engineers: Automate validation and policy enforcement for AI models, MCP Servers, and frameworks.
- Compliance Officers / Risk Managers: Gain audit-ready visibility into AI workload provenance, signatures, and runtime behavior.
ACCEPTANCE CRITERIA
- CI/CD Integration: Automated tests for AI workload detection, BOM ingestion, and signature verification.
- Release Enablement: Updated RHACS documentation, MCP API extensions, and Lightspeed integration samples.
- AI BOM Support:
  - Ingest and generate AI BOMs for models, frameworks, and MCP Servers.
  - Verify provenance and integrity via Sigstore.
- Runtime Guardrails:
  - Integrate with OSS guardrail tools (e.g., Garak, IBM ART) for drift and anomaly detection.
  - Alert on unauthorized tool invocations or unverified workloads.
- MCP + Lightspeed Integration:
  - Expose RHACS AI security APIs for consumption by MCP.
  - Enable AI security insights and recommendations within OpenShift Lightspeed developer experiences.
- Telemetry / API:
  - Provide queryable endpoints for AI BOM data, signature validation status, and runtime security alerts.
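The Sigstore verification criterion above would in practice run through cosign (e.g. `cosign verify-blob` on the model artifact). As a minimal, hedged sketch of the integrity half of that check, a verifier can compare an artifact's digest against the hash recorded in its AI BOM:

```python
import hashlib
import hmac

def verify_model_integrity(model_bytes: bytes, expected_sha256: str) -> bool:
    """Check that a model artifact matches the SHA-256 digest recorded
    in its AI BOM. A production pipeline would pair this with a
    Sigstore signature check (e.g. `cosign verify-blob`) to also
    establish who signed the artifact, not just that it is unchanged."""
    actual = hashlib.sha256(model_bytes).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(actual, expected_sha256.lower())

weights = b"example model weights"
good_digest = hashlib.sha256(weights).hexdigest()
print(verify_model_integrity(weights, good_digest))              # True
print(verify_model_integrity(b"tampered weights", good_digest))  # False
```

A failed check here maps directly to the "unverified workloads" alert in the runtime guardrail criteria.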
QUESTIONS
- How can RHACS accurately discover and tag AI workloads within heterogeneous clusters and registries?
- Which AI BOM schema (SPDX, CycloneDX-AI) ensures the best interoperability for model provenance tracking?
- How can Sigstore signing workflows be seamlessly extended to AI models and tools?
- How should RHACS expose AI workload data via MCP APIs to power Lightspeed experiences?
- What telemetry is required to detect agentic behavior drift and runtime policy violations?
ACTIONS
RHACS will enable users to:
- Automatically detect and classify AI workloads.
- Ingest and generate AI BOMs, verifying provenance via Sigstore.
- Monitor runtime behavior for drift or anomalous activity.
- Expose AI BOM and runtime security data to MCP APIs for integration with OpenShift Lightspeed.
- Automate enforcement and response actions for compromised or unverified workloads.
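The enforcement action for unauthorized tool invocations could, in its simplest form, be an allowlist check per workload. The names and structure below are illustrative assumptions, not RHACS policy objects:

```python
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Allowlist of MCP tools a given AI workload may invoke.

    Purely illustrative: real RHACS runtime policies carry far more
    context (cluster scope, severity, enforcement mode) than this."""
    workload: str
    allowed_tools: set = field(default_factory=set)

    def check(self, tool: str) -> str:
        # Return a verdict string rather than raising, so callers can
        # route violations into the runtime alerting pipeline.
        return "allow" if tool in self.allowed_tools else "alert"

policy = ToolPolicy("fraud-agent", {"search_docs", "summarize"})
print(policy.check("search_docs"))   # allow
print(policy.check("delete_table"))  # alert
```

Drift detection is the complement of this check over time: a workload whose invocation pattern diverges from its historical baseline would raise the same class of alert.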
CONSIDERATIONS
- API Design: RHACS APIs must be extensible for MCP consumption and securely scoped for Lightspeed access.
- Integration Points: Sigstore (signing), Garak (runtime analysis), IBM ART (adversarial detection).
- Performance: Efficient handling of large model artifacts and distributed AI BOM synchronization.
- Future Direction: Enable bi-directional integration with MCP for proactive security recommendations and model posture awareness.
- Backward Compatibility: Maintain compatibility with existing SBOM and image scanning capabilities.
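To make the "securely scoped for Lightspeed access" point concrete, here is a hypothetical client-side sketch of querying an AI BOM endpoint with a narrowly scoped token. The endpoint path, parameter names, and scope string are all assumptions; no such RHACS API surface is published yet:

```python
from urllib.parse import urlencode

def build_ai_bom_query(base_url: str, cluster: str,
                       token_scope: str = "ai-bom:read") -> dict:
    """Construct a request spec for a hypothetical AI BOM endpoint.

    Everything here (path, query parameters, scope name) is
    illustrative; the real API would be defined during implementation."""
    query = urlencode({"cluster": cluster})
    return {
        "url": f"{base_url}/v1/ai/boms?{query}",
        "headers": {"Authorization": f"Bearer <token with scope {token_scope}>"},
    }

req = build_ai_bom_query("https://rhacs.example.com", "prod-east")
print(req["url"])
```

Scoping the token to read-only BOM data keeps the Lightspeed integration from needing (or holding) broader RHACS privileges.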
UX/UI
- Add AI Workload View to RHACS with AI BOM visualization and signature validation.
- Provide Runtime Guardrail Panel highlighting drift and anomalous behavior.
- Validate UX flows with OpenShift Lightspeed integration.
DELIVERY PRIORITY
Desired order of delivery for the stories comprising this epic:
| Phase | Timeline | Key Deliverables |
|---|---|---|
| Phase 1 | 2Q–3Q 2026 | AI workload detection, AI BOM ingest/generate, Sigstore verification |
| Phase 2 | 4Q 2026–1H 2027 | Runtime guardrail integrations, drift detection, MCP API exposure |
| Phase 3 | 2027+ | Full AI-native compliance, multi-tenant observability, Lightspeed integration with automated incident response |