Feature
Resolution: Done
Major
0% To Do, 0% In Progress, 100% Done
Feature Overview
The Model Context Protocol (MCP)[1] extends the reach of Large Language Models to external systems for information retrieval and control, efficiently and with low overhead, alleviating the need for manual context transfer or complex RAG pipelines and infrastructure.
[1] https://modelcontextprotocol.io/
Goals
The goals of this outcome are:
- To ensure that all TektonCD resources can be accessed via an MCP implementation that can be consumed by OpenShift Lightspeed or a third party solution.
- To show Red Hat leadership in driving MCP implementations and their standardization within our upstream communities.
- To better understand current limitations around MCP, as well as the limitations of off-the-shelf models when interacting with Tekton data, in order to understand what specialized training could look like.
Requirements
Requirements | Notes | Is MVP
---|---|---
Establish an open source project in the upstream community for the MCP implementation that accepts and encourages participation and contributions | We'll discuss upstream creating a project under the tektoncd organization | Y
Deliver an MCP server implementation that can interact with TektonCD resources (starting with tektoncd/pipeline objects) | See https://docs.google.com/document/d/1GgTafD6z6dgLW-Paf9DWhRpsnz_gjb8q0Zb0riEbU0Q/edit for initial ideation | Y
Users can supply a Kubernetes config/token to the MCP server for authenticating to the backend | Can be through the environment, a config file, ... | Y
Prepare a demo for Red Hat Summit 2025 using the MCP implementation | - | Y
Use Cases
The MCP server implementation is intended to provide context from a live Tekton instance to an LLM, so that a user can interface with Tekton through an LLM.
As of now, we are purely targeting a tech demo to be presented at Summit 2025. As a side effect, we establish ourselves as AI drivers in the upstream community.
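To make the use case more concrete, the tool surface such a server might expose to an LLM could look like the following. These tool names and descriptions are hypothetical examples for illustration, not a committed API:

```json
[
  { "name": "list_pipelines", "description": "List Pipeline objects in a namespace" },
  { "name": "list_pipelineruns", "description": "List PipelineRun objects and their status in a namespace" },
  { "name": "get_pipelinerun_logs", "description": "Fetch the logs of a given PipelineRun" },
  { "name": "start_pipeline", "description": "Trigger a Pipeline with the given parameters" }
]
```

An LLM client would discover these tools via the MCP protocol and invoke them on the user's behalf, e.g. to answer "why did my last PipelineRun fail?".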
Out of scope
- Specialized LLM training or re-training to understand Tekton data. The scope of this feature is to provide the MCP server implementation and to draw conclusions about how well today's off-the-shelf models understand that data.
- Productization. We do not need to deliver any productized version of the MCP server implementation as of now. The work is purely upstream for now.
- No multi-tenancy or user-specific authentication is required as of now. It is sufficient for the MCP server to use credentials configured at startup time, e.g. a Kubernetes config/token provided via the environment or similar.
Dependencies
No dependencies for this feature.
Background, and strategic fit
There are a lot of unknowns right now. The MCP specification is a moving target, and there are several SDK implementations for various languages out there, some more complete than others.
As for SDKs, we can choose from the following:
- python-sdk: https://github.com/modelcontextprotocol/python-sdk
- mcp-go (unofficial): https://github.com/mark3labs/mcp-go
As Tekton does not yet have a Python SDK, the initial MVP target is to use mcp-go.
Assumptions
We assume off-the-shelf LLMs will be used for testing the MCP implementation. A developer-friendly option is goose (https://github.com/block/goose), which runs on your local machine and can interface with MCP servers.
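For local testing, several MCP stdio clients share a similar configuration convention: point the client at a server command and pass credentials via environment variables. The fragment below follows the common `mcpServers` JSON shape; the binary name and paths are illustrative assumptions (check the chosen client's documentation for its exact schema):

```json
{
  "mcpServers": {
    "tekton": {
      "command": "tekton-mcp-server",
      "env": {
        "KUBECONFIG": "/home/user/.kube/config"
      }
    }
  }
}
```

This matches the out-of-scope decision above: a single set of credentials, supplied at server startup, with no per-user authentication.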
Customer Considerations
Documentation/QE Considerations
At this point in time, no extensive documentation is required.
We do not expect to require any QE activities for this particular feature.
Impact
Related Architecture/Technical Documents
https://docs.google.com/document/d/1GgTafD6z6dgLW-Paf9DWhRpsnz_gjb8q0Zb0riEbU0Q/edit
Definition of Ready
- The objectives of the feature are clearly defined and aligned with the business strategy.
- All feature requirements have been clearly defined by Product Owners.
- The feature has been broken down into epics.
- The feature has been stack ranked.
- Definition of the business outcome is in the Outcome Jira (which must have a parent Jira).