OpenShift Pipelines / SRVKP-7682

Learn about MCP and understand tektoncd/mcp-server


    • Type: Story
    • Resolution: Done
    • Priority: Normal
    • Sprint: Pipelines Sprint Crookshank 30, Pipelines Sprint Crookshank 31

      Story (Required)

      As an Engineer contributing to openshift-pipelines, I want to upskill and contribute to AI initiatives within the team.

      Background (Required)

      [ Check for mails from Shai ]

      By the end of Q2, 100% of the organization will have made investments in AI through targeted AI training programs, adoption of AI-driven tools in daily workflows, or the integration of AI innovations into our portfolio or core business operations, targeting improvements in productivity.

      This key result is broken down into three options to choose from (we are not expecting everyone to innovate or be involved in all three).

      1. Training: Conduct AI literacy and hands-on training sessions; a variety of training is available on the product engineering source page.
      2. Adoption: Adopt at least one AI-powered tool or process improvement in your day-to-day work, such as AI analytics, automation, or machine-learning-based tools. Refer to the available innovations on the source page or the list of innovations developed in our teams.
      3. Innovation: Enhance (join existing innovation development), pilot, or implement AI-driven solutions that increase productivity, customer experience, or operational efficiency.

      In the end, our goal is for all associates to understand the AI landscape and to continuously adopt tools that boost productivity (the second option). We acknowledge that this takes time and must be balanced with day-to-day tasks and product priorities, but you will receive support to invest time in this. Experimenting is key to building experience and knowledge, and we recognize that not every attempt will succeed; it is perfectly fine to try something new and find it is not as efficient as you expected.

      We will track progress on this KR by reviewing what each associate has accomplished during the quarter. To that end, please fill out this form during the Q2 Quarterly Connection conversations (or before).

      Available IT resources can be found here.

      Out of scope

      <Defines what is not included in this story>

      Approach (Required)

      1. Upskill - take courses and read articles on AI
      2. Adopt - use AI agents (Cursor) for code development and unit tests
      3. Contribute - contribute to the Tekton MCP server (tektoncd/mcp-server)
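      As background for the "Contribute" item: MCP (the Model Context Protocol) is a JSON-RPC 2.0 protocol through which a server such as tektoncd/mcp-server exposes tools that AI agents can discover and invoke. A minimal sketch of the two core messages, assuming a hypothetical tool name and arguments (consult tektoncd/mcp-server for the tools it actually registers):

```python
import json

def mcp_request(method, params, req_id=1):
    """Build a JSON-RPC 2.0 message as used by the Model Context Protocol."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Ask the server which tools it exposes.
list_tools = mcp_request("tools/list", {})

# Invoke a tool. "list_pipelineruns" and its arguments are hypothetical
# placeholders, not confirmed tool names from tektoncd/mcp-server.
call_tool = mcp_request(
    "tools/call",
    {"name": "list_pipelineruns", "arguments": {"namespace": "default"}},
    req_id=2,
)

print(json.dumps(call_tool, indent=2))
```

      In practice these messages travel over stdio or HTTP between an agent and the server; the sketch only shows their shape.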

      Dependencies

       

      Check mail from Chris 

      Internal AI sandboxes are now available for all engineering associates

      Models.corp[1] is a model hosting platform that provides easy sandbox access to open AI models that have been pre-approved for sandbox usage. With the provided API access to these models, you can experiment with things like:

      • prompt engineering and inference query tuning
      • tool calling and agent integration
      • developing applications that call or integrate with these API endpoints
      • model reasoning evaluation
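      For the prompt-engineering and inference-tuning bullet, a hedged sketch of what a request to such a model endpoint might look like, assuming an OpenAI-compatible chat-completions API; the URL, model id, and field values below are placeholders, not actual Models.corp details:

```python
import json

# Placeholder endpoint -- not the real Models.corp URL.
BASE_URL = "https://models.corp.example/v1/chat/completions"

def build_request(prompt, temperature=0.2, max_tokens=256):
    """Assemble a chat-completions payload for inference-tuning experiments."""
    return {
        "model": "pre-approved-open-model",  # placeholder model id
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,  # lower values give more deterministic output
        "max_tokens": max_tokens,    # cap on generated tokens
    }

body = build_request("Summarize what a Tekton PipelineRun is.")
print(json.dumps(body, indent=2))
```

      Prompt-engineering experiments then amount to varying the system/user messages and sampling parameters and comparing the responses.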


      The MOSAIC platform[2] sandboxes provide access to a managed OpenShift AI cluster with a pool of reservable GPUs to run your own code and models. With this offering, you’ll be able to experiment with things like:

      • notebooks or running your own services in OpenShift AI
      • deploying your own AI tools directly in OpenShift
      • hosting experimental applications that make use of Models.corp’s APIs
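      As an illustration of the deployment bullets, a hypothetical Kubernetes manifest for running an experimental app on a sandbox cluster; every name, the image, and the secret are placeholders:

```yaml
# Hypothetical sketch only: image, names, and secret are placeholders,
# not real MOSAIC or Models.corp resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-experiment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcp-experiment
  template:
    metadata:
      labels:
        app: mcp-experiment
    spec:
      containers:
        - name: app
          image: quay.io/example/mcp-experiment:latest  # placeholder image
          env:
            - name: MODELS_API_KEY
              valueFrom:
                secretKeyRef:
                  name: models-corp-credentials  # placeholder secret
                  key: token
```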


      These internal AI sandbox offerings will allow you to test your ideas and learn about AI quickly, but they are not designed to support significant performance or scale requirements. They provide small quotas of resources, from shared pools, with 48-hour usage windows. These resources should be enough to try things and test feasibility and function, but will not be sufficient for most real solutions.


      Other non-sandbox AI offerings are available to support the proper incubation of your ideas, with better performance and scalability, once you’ve completed your experimentation and learning in the sandboxes. More on this in a moment. 


      Before requesting access to the sandboxes, review the acceptable use policy[3] and the wiki on approved models and data sets for experimentation[4] (https://source.redhat.com/groups/public/ai/frequently_asked_questions/acceptable_use_policy__ai_experimentation), which explains how Red Hat associates should use the sandboxes, AI, and data appropriately and compliantly.


      A community of sandbox users has started to form in #help-it-ai-platforms[5] on Slack. Use the channel to request support, but please also consider sharing how you’re using the sandboxes and your tips for others. 


      The path from sandboxes to production 

      As you move along the path of ideation to experimentation and then to incubation to production, you’ll naturally grow out of these sandbox environments. If you have an idea you believe can make Red Hat more efficient or drive value, and your sandbox experimentation makes you believe it is worth investing in, I encourage you to submit it to the AI Combinator[6] and discuss getting access to non-sandbox services. 


      We relaunched AI Combinator in Q1 with the additional infrastructure and processes needed to incubate good ideas with resources, evaluate them against business needs, and ultimately put them into production within Red Hat. I look forward to seeing what you create!

      And for those of you still working on your AI-related goal for this quarter, here’s a list of experimental/learning ideas[7] to consider and talk over with your manager.

      Acceptance Criteria (Mandatory)

      Submit the form by the end of Q2.

      INVEST Checklist

      Dependencies identified

      Blockers noted and expected delivery timelines set

      Design is implementable

      Acceptance criteria agreed upon

      Story estimated


      Done Checklist

      • Code is completed, reviewed, documented and checked in
      • Unit and integration test automation has been delivered and is running cleanly in the continuous integration/staging/canary environment
      • Continuous Delivery pipeline(s) is able to proceed with new code included
      • Customer facing documentation, API docs etc. are produced/updated, reviewed and published
      • Acceptance criteria are met

              diagrawa Divyanshu Agrawal