OpenShift Container Platform (OCP) Strategy / OCPSTRAT-2236

AI Driven OpenShift Installation Experience (Preview)


    • Product / Portfolio Work
    • 38% To Do, 63% In Progress, 0% Done

      Outcome Overview

      An elevator pitch (value statement) that describes the Outcome in a clear, concise way.  Complete during New status.

      Create an AI-driven OpenShift installation experience that leverages the existing OpenShift installer(s) and allows the user (e.g. Admin, Developer, DevOps Engineer, Citizen Developer/User) to drive the OpenShift installation using natural language prompting (e.g. a conversational chat interface).

      AI-driven OpenShift installation experience

      Goals (aka. expected user outcomes)

      The observable functionality that the user now has as a result of receiving this feature. Include the anticipated primary user type/persona and which existing features, if any, will be expanded. Complete during New status.

      The goal is to develop a conversational AI experience for OpenShift installation, starting with a connected deployment on console.redhat.com, specifically targeting the Assisted Installer on at least one of its supported providers (bare metal, vSphere, Nutanix, or OCI) and ROSA (aspirational goal) within 90 days.

      The AI-assisted installation workflow will leverage a frontier LLM for intelligent decision-making and interaction, and will integrate with the MCP server (assisted-service-mcp) to execute installation-related tasks.
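
      The sketch below, which is illustrative only and not the committed design, shows how the conversational layer could bridge a frontier LLM and the assisted-service-mcp server using the MCP Python SDK. The server launch command, the create_cluster tool name, and its arguments are assumptions made for the example; the real tool surface is whatever assisted-service-mcp exposes, and the LLM call is stubbed out.

      # Illustrative sketch only: the server command, tool name, and arguments
      # are assumptions, not the actual assisted-service-mcp interface.
      import asyncio

      from mcp import ClientSession, StdioServerParameters
      from mcp.client.stdio import stdio_client


      def choose_tool_call(user_prompt: str, tools) -> tuple[str, dict]:
          """Placeholder for the frontier LLM: given the user's natural-language
          prompt and the advertised MCP tools, pick a tool and its arguments.
          In the real flow this would be a model call."""
          return "create_cluster", {"name": "demo", "openshift_version": "4.19"}


      async def main() -> None:
          # Connect to the MCP server that wraps installer actions
          # (the command here is a hypothetical stand-in).
          server = StdioServerParameters(command="assisted-service-mcp", args=[])
          async with stdio_client(server) as (read, write):
              async with ClientSession(read, write) as session:
                  await session.initialize()

                  # Advertise the installer actions to the LLM as callable tools.
                  tools = (await session.list_tools()).tools

                  # Let the model translate the user's prompt into a tool call.
                  name, args = choose_tool_call(
                      "Install a three-node cluster on vSphere", tools
                  )

                  # Execute the chosen installer action and surface the result
                  # (or a clarifying question) back to the user in the chat.
                  result = await session.call_tool(name, arguments=args)
                  print(result)


      if __name__ == "__main__":
          asyncio.run(main())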

      Requirements (aka. Acceptance Criteria):

      A list of specific needs or objectives that a feature must deliver in order to be considered complete.  Be sure to include nonfunctional requirements such as security, reliability, performance, maintainability, scalability, usability, etc.  Initial completion during Refinement status.

      Acceptance criteria includes:

      • An OpenShift cluster, created through natural language prompting, that installs successfully and is available for user workloads.
      • If the user-provided prompt is insufficient, the response back to the user should be informative enough that the user knows they need to clarify or provide additional information to complete the task at hand.
      • All installer actions need to be exposed through MCP (see the sketch following this list).
      • Decisions documented on what is stored where and on the execution flow between the client, Red Hat, and the target environment.
      • Repeatable builds with Konflux.
      • Customer feedback on the AI-driven install experience.
      • Something customers can view and/or try within 90 days (by Sept 1, 2025).
      • Installation can be automated in the cloud; on-premises installation is aspirational and not required for this first phase.
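
      As a companion to the "exposed through MCP" criterion above, the following is a minimal, hedged sketch of what an MCP-coded installer action could look like on the server side, using FastMCP from the MCP Python SDK. The register_cluster tool and its parameters are hypothetical stand-ins for real Assisted Service operations, not the actual assisted-service-mcp implementation.

      # Illustrative sketch: a hypothetical installer action exposed as an MCP
      # tool; a real implementation would call the Assisted Service API.
      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("assisted-installer-sketch")


      @mcp.tool()
      def register_cluster(name: str, openshift_version: str, platform: str) -> str:
          """Register a new cluster definition with the installer backend.

          Here we only echo the request so the end-to-end MCP wiring can be
          exercised; the real tool would invoke Assisted Service with the
          caller's credentials.
          """
          return f"Registered cluster '{name}' ({openshift_version}) on {platform}"


      if __name__ == "__main__":
          # Serve the tool over stdio so the chat agent (MCP client) can call it.
          mcp.run()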

      Use Cases (Optional):

      Include use case diagrams, main success scenarios, alternative flow scenarios.  Initial completion during Refinement status.

      <your text here>

      Questions to Answer (Optional):

      Include a list of refinement / architectural questions that may need to be answered before coding can begin.  Initial completion during Refinement status.

      • Where is the model hosted, and which model should be used?
        • Any reasonably good enterprise-class LLM provider may be used (e.g. Gemini, OpenAI, Azure OpenAI, Claude/Sonnet), which may require Red Hat approval. Note that we are not limited to using a Red Hat model or Granite.
      • Who is the target user?
      • Where does this get triggered from? For the MVP, console.redhat.com, though long term we believe this will need to be available from Ask RH.
      • Should this leverage Assisted Service, OCM, or Cluster API (e.g. CAPA for ROSA)?
        • For the MVP, we want to target Assisted Service and ROSA (stretch goal) to create the cluster.
        • For Assisted Service, is this BYO infrastructure, or do we also set up the infrastructure? We'd like the infrastructure to be set up as well.
      • What is the obligation on the client? What needs to be set up on the client first?
      • Where are credentials and config files stored?
      • From where does the execution need to flow?

      Out of Scope

      High-level list of items that are out of scope.  Initial completion during Refinement status.

      • No new OpenShift installers will be created. We will leverage the existing OpenShift installer(s).
      • This is focused on the create-only flow. Full CRUD (lifecycle management), including upgrade, is out of scope for the initial phase.
      • Installing layered products is aspirational and not required for this phase.

      Background

      Provide any additional context that is needed to frame the feature.  Initial completion during Refinement status.

      The convergence of chatbots, vibe coding, Model Context Protocol (MCP), AI agents, and AIOps is fundamentally reshaping how customers want to interact with our software for management, lifecycling, development, troubleshooting, etc. We therefore need to develop product experiences that apply AI to the customer experience, elevating our products to drive new levels of efficiency, insight, and customer satisfaction.

      Customer Considerations

      Provide any additional customer-specific considerations that must be made when designing and delivering the Feature.  Initial completion during Refinement status.

       

      The following are customer concerns that we will not address in the 90-day MVP but will need to address for a productizable and/or long-term solution.

      • Disconnected/air-gapped deployments
      • Data sovereignty

      Documentation Considerations

      Provide information that needs to be considered and planned so that documentation will meet customer needs.  If the feature extends existing functionality, provide a link to its current documentation. Initial completion during Refinement status.

      <your text here>

      Interoperability Considerations

      Which other projects, including ROSA/OSD/ARO, and versions in our portfolio does this feature impact?  What interoperability test scenarios should be factored by the layered products?  Initial completion during Refinement status.

      • console.redhat.com integration
      • Installer integration, e.g. Assisted Service
      • Lightspeed Core (and/or Llama Stack) integration
      • Retrieval-augmented generation (RAG) to complement LLM responses and improve response accuracy (see the sketch following this list)
      • Telemetry system integration including conversation and feedback data collection and analysis
      • Frontier LLM integration
      • PatternFly chatbot UI implementation
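
      The following is a minimal, dependency-free sketch of the RAG item above: retrieve the documentation snippets most relevant to the user's request and prepend them to the LLM prompt. The word-overlap scoring and the snippets are purely illustrative; a real implementation would use embedding-based retrieval (e.g. via Lightspeed Core / Llama Stack) over OpenShift documentation.

      # Illustrative RAG sketch: naive word-overlap retrieval stands in for a
      # real embedding-based retriever; the snippets below are examples only.
      DOC_SNIPPETS = [
          "Assisted Installer supports bare metal, vSphere, Nutanix, and OCI.",
          "A highly available cluster needs at least three control plane nodes.",
          "ROSA clusters are created through OpenShift Cluster Manager (OCM).",
      ]


      def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
          """Return the k snippets sharing the most words with the question."""
          q_words = set(question.lower().split())
          return sorted(
              snippets,
              key=lambda s: len(q_words & set(s.lower().split())),
              reverse=True,
          )[:k]


      def build_prompt(question: str) -> str:
          """Prepend retrieved context so the LLM grounds its answer in docs."""
          context = "\n".join(f"- {s}" for s in retrieve(question, DOC_SNIPPETS))
          return f"Context:\n{context}\n\nUser question: {question}"


      if __name__ == "__main__":
          print(build_prompt("How many control plane nodes does my cluster need?"))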

       

      Post Completion Review – Actual Results

      After completing the work (as determined by the "when" in Expected Results above), list the actual results observed / measured during Post Completion review(s).

       
