Type: Feature
Resolution: Duplicate
Priority: Major
Feature Overview (aka. Goal Summary)
An elevator pitch (value statement) that describes the Feature in a clear, concise way. Complete during New status.
Create a Model Context Protocol (MCP) server implementation with support for core OpenShift that
- enables direct, flexible, and automated management of OpenShift clusters.
- interacts directly with the Kubernetes API.
- enables AI agents to interact with and manage OpenShift clusters using natural language and other tools.
- translates requests from AI tools into operations that Kubernetes understands (such as kubectl commands, Helm actions, or custom API calls) and returns results in a format the AI can understand (see the sketch after the links below).
See also
upstream: https://github.com/containers/kubernetes-mcp-server/
downstream: https://github.com/openshift/openshift-mcp-server
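To make the translation step concrete, here is a minimal sketch of the kind of JSON-RPC 2.0 "tools/call" message an MCP client sends on an agent's behalf. The tool name "pods_list" and its arguments are illustrative placeholders, since the actual tool catalog is discovered from the server at runtime via "tools/list".

```python
import json

# A sketch of the JSON-RPC 2.0 envelope an MCP client sends when an AI agent
# asks to list pods. "pods_list" and its arguments are hypothetical; the real
# tool names come from the server's advertised catalog.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "pods_list",                       # hypothetical tool name
        "arguments": {"namespace": "production"},  # hypothetical argument
    },
}
print(json.dumps(request, indent=2))
```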
Goals (aka. expected user outcomes)
Feature Goal
We want to announce a Dev Preview of the upstream kubernetes-mcp-server by the end of 2025, ideally within Q3 2025, through a series of blogs.
User Goals
The observable functionality that the user now has as a result of receiving this feature. Include the anticipated primary user type/persona and which existing features, if any, will be expanded. Complete during New status.
- Users (primarily DevOps engineers, SREs, and AI/automation platform developers) want to perform OpenShift operations, such as CRUD on any resource, pod management, event viewing, Operator installation, Helm chart installation, and project/namespace listing, through a standardized MCP interface or a chatbot.
- Users want to automate cluster management tasks, troubleshoot issues, perform scaling operations, and integrate Kubernetes control into AI-driven workflows or IDEs (e.g., Cursor, VS Code, Claude Desktop); a client registration sketch follows this list.
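As a rough illustration of the IDE/chat integration, several MCP-aware clients (e.g., Claude Desktop) register servers through an "mcpServers" configuration block. The binary name and flag below are assumptions to be checked against the upstream README.

```python
import json

# A sketch of the "mcpServers" registration block used by several MCP-aware
# clients. The command name and the --read-only flag are assumptions; consult
# the kubernetes-mcp-server documentation for the supported invocation.
config = {
    "mcpServers": {
        "openshift": {
            "command": "kubernetes-mcp-server",  # assumed binary name on PATH
            "args": ["--read-only"],             # assumed flag; start non-destructive
        }
    }
}
print(json.dumps(config, indent=2))
```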
User Stories
- As a DevOps engineer managing a production Kubernetes cluster, I want the AI assistant to help me diagnose and resolve pod failures in real time through natural language commands, so that I can quickly restore service without manual oc/kubectl debugging or context switching.
- As a platform reliability engineer for an e-commerce service, I want the MCP server to scale my cluster up automatically based on real-time demand, so that during flash-sale events the system maintains <100ms latency, and to scale it back down when demand subsides, avoiding over-provisioning costs during low-traffic periods.
Requirements (aka. Acceptance Criteria):
A list of specific needs or objectives that a feature must deliver in order to be considered complete. Be sure to include nonfunctional requirements such as security, reliability, performance, maintainability, scalability, usability, etc. Initial completion during Refinement status.
Published blogs regarding the upstream kubernetes-mcp-server by end of 2025, ideally within Q3 2025.
Functional requirements
- Must support CRUD operations on all OpenShift/Kubernetes resources.
- Must provide pod-specific functions (list, get, delete, logs, exec, run, resource usage).
- Must support Operator and Helm chart management (install, list, uninstall, update).
- Must allow namespace and project listing, and event viewing.
- Must be configurable via CLI arguments (e.g., ports, logging, kubeconfig, output format, read-only mode) and usable from chat assistants (see the launch sketch after this list).
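A hedged sketch of what the CLI-driven configuration could look like when a client launches the server. The flag names here are assumptions modeled on the requirement above, not confirmed options; verify them against the server's --help output.

```python
import subprocess

# Launch the server with the kinds of CLI options the requirements call for.
# All flag names below are assumptions to be verified against the upstream CLI.
proc = subprocess.Popen(
    [
        "kubernetes-mcp-server",                     # assumed binary name
        "--port", "8080",                            # assumed: listen port
        "--kubeconfig", "/home/user/.kube/config",   # assumed: cluster credentials
        "--read-only",                               # assumed: disable mutating tools
    ],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
print("server started, pid:", proc.pid)
```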
Nonfunctional requirements
- Must support read-only and non-destructive modes to prevent accidental changes and must not expose sensitive data.
- Write actions must require user approval (human in the loop); a client-side approval sketch follows this list.
- Must minimize latency by interacting directly with the Kubernetes API, not via external tools.
- Must support multiple concurrent requests and be suitable for integration in distributed automation workflows.
- Should be easy to configure and integrate into developer tools, automation platforms, AI agents, and AI chat experiences.
- Must have benchmarking to assess the performance of LLM models for OpenShift/Kubernetes related tasks.
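The human-in-the-loop requirement can be enforced on the client side, as in the sketch below, which gates mutating tool calls on explicit approval. The tool names in WRITE_TOOLS are hypothetical; a real client would classify tools from server-provided metadata (the MCP spec defines annotations such as readOnlyHint and destructiveHint).

```python
# A client-side sketch of the human-in-the-loop requirement: before forwarding
# a mutating tool call to the server, ask the operator to confirm. The set of
# "write" tools here is hypothetical; a real client would derive it from tool
# metadata (e.g., MCP tool annotations like readOnlyHint/destructiveHint).
WRITE_TOOLS = {"resources_create_or_update", "resources_delete", "pods_delete"}

def approve_and_call(call_tool, name: str, arguments: dict):
    """Forward a tool call, pausing for approval when it can mutate state."""
    if name in WRITE_TOOLS:
        answer = input(f"Allow '{name}' with {arguments}? [y/N] ")
        if answer.strip().lower() != "y":
            return {"denied": True}
    return call_tool(name, arguments)

# Usage with a stand-in transport that just echoes the request:
result = approve_and_call(lambda n, a: {"tool": n, "args": a},
                          "pods_delete", {"namespace": "demo", "name": "web-1"})
print(result)
```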
Anyone reviewing this Feature needs to know which deployment configurations the Feature will apply to (or not) once it has been completed. Describe specific needs (or indicate N/A) for each of the following deployment scenarios. For specific configurations that are out of scope for a given release, provide the OCPSTRAT (for the future-to-be-supported configuration) as well.
Deployment considerations | List applicable specific needs (N/A = not applicable)
Self-managed, managed, or both |
Classic (standalone cluster) |
Hosted control planes |
Multi node, Compact (three node), or Single node (SNO), or all |
Connected / Restricted Network |
Architectures, e.g. x86_64, ARM (aarch64), IBM Power (ppc64le), and IBM Z (s390x) |
Operator compatibility |
Backport needed (list applicable versions) |
UI need (e.g. OpenShift Console, dynamic plugin, OCM) |
Other (please specify) |
Use Cases (Optional):
Include use case diagrams, main success scenarios, alternative flow scenarios. Initial completion during Refinement status.
<your text here>
Questions to Answer (Optional):
Include a list of refinement / architectural questions that may need to be answered before coding can begin. Initial completion during Refinement status.
<your text here>
Out of Scope
High-level list of items that are out of scope. Initial completion during Refinement status.
<your text here>
Background
Provide any additional context needed to frame the feature. Initial completion during Refinement status.
The Model Context Protocol (MCP) is an open standard introduced by Anthropic in 2024 to standardize how AI systems (e.g. LLMs) interact with external tools, data sources, and environments. Think of it as a standardized "port" or "bridge" that allows AI models to easily access and use information from various systems. MCP simplifies the process of integrating AI with external data and services, making it easier to build more powerful and versatile AI applications.
MCP follows a client-server model. An AI application (the client) can connect to various MCP servers, which expose different tools, data, or functionalities.
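For concreteness, here is a minimal sketch of that client-server handshake over the stdio transport, assuming the server binary is on PATH and speaks newline-delimited JSON-RPC (the standard MCP stdio framing): the client sends "initialize", signals "notifications/initialized", then lists the available tools.

```python
import json
import subprocess

# Minimal MCP stdio handshake: initialize, then discover the tool catalog.
# The server binary name is an assumption.
proc = subprocess.Popen(["kubernetes-mcp-server"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def rpc(method, params, msg_id):
    """Send one JSON-RPC request over stdio and read one line back."""
    proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": msg_id,
                                 "method": method, "params": params}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

init = rpc("initialize", {"protocolVersion": "2025-03-26",
                          "capabilities": {},
                          "clientInfo": {"name": "demo", "version": "0.1"}}, 1)
# The spec requires an "initialized" notification before further requests.
proc.stdin.write(json.dumps({"jsonrpc": "2.0",
                             "method": "notifications/initialized"}) + "\n")
proc.stdin.flush()
tools = rpc("tools/list", {}, 2)
print([t["name"] for t in tools["result"]["tools"]])
```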
Customer Considerations
Provide any additional customer-specific considerations that must be made when designing and delivering the Feature. Initial completion during Refinement status.
<your text here>
Documentation Considerations
Provide information that needs to be considered and planned so that documentation will meet customer needs. If the feature extends existing functionality, provide a link to its current documentation. Initial completion during Refinement status.
<your text here>
Interoperability Considerations
Which other projects, including ROSA/OSD/ARO, and versions in our portfolio does this feature impact? What interoperability test scenarios should be factored by the layered products? Initial completion during Refinement status.
<your text here>
Issue links
- clones: OCPSTRAT-2270 [Tech Preview] OpenShift Core MCP Server - Duplicate of OCPSTRAT-2465 (Closed)
- is related to: OCPSTRAT-2201 [Dev Preview] Kubernetes Core MCP Server by end of 2025 (Release Pending)