      Outcome Overview

      Our objective for OpenShift is to address key customer challenges in (1) managing a hybrid infrastructure, (2) accelerating new application development and application modernization, (3) providing a platform for artificial intelligence, and (4) automating DevSecOps processes across a hybrid cloud environment. These four pillars shape our OpenShift strategy and are at the core of Red Hat’s Open Hybrid Cloud strategy.

      OpenShift’s path to growth relies on driving adoption and customer expansion: customers bringing more workloads to OpenShift, deploying across more infrastructures, and leveraging more capabilities of our cloud-native application platform. New cloud-native applications make up the majority of workloads on OpenShift (and Kubernetes in general), so growth depends on customers bringing additional workloads to the platform as well, including AI, Data/Analytics, VMs, Telco 5G Cloud-Native Network Functions, and other certified ISV applications.

      The longer-term, strategic opportunity for new workloads is AI, which is also part of many customers’ modernization strategies to build more intelligent apps. Growing interest in generative AI, driven by large language models like ChatGPT, presents a great opportunity for Red Hat to provide a platform for model training/tuning, serving/inferencing, development, and MLOps across a hybrid environment with OpenShift AI.

      Furthermore, increased use of OpenShift for AI provides additional revenue opportunities with AI Accelerators. The investments that are part of this brief are directly related to our ability to price OpenShift with AI Accelerators, including the items below (a minimal scheduling sketch follows the list):

      • Kubernetes Operators tested/certified for NVIDIA, Intel, AMD (Future), Qualcomm (Future), and IBM AIU (Future), with support provided by the respective third party
      • Kubernetes Jobs for workloads on accelerators*
      • Kueue (Future) for workloads on accelerators*
      • Kernel Module Management Operator for accelerator kernel module life cycle
      • Node Feature Discovery to expose accelerator node-level information
      • Kubernetes accelerator slicing via Instaslice*
      • Kubernetes Dynamic Resource Allocation (https://github.com/kubernetes/dynamic-resource-allocation) (Future) of accelerators*
      • Kubernetes Autoscaling for AI (Future) for accelerator workloads
      • Kubernetes Workload API (sets) for AI (Future) for accelerator workloads
      • OCI serving of models to Kubernetes pods (Future) on accelerators*
      • Segmentation (Future) of AI network traffic to accelerator nodes*
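
      As a minimal sketch of how these pieces fit together (not a prescribed implementation), the Go example below uses client-go to submit a plain Kubernetes Job that runs on an accelerator node: the vendor’s operator installs a device plugin that advertises an extended resource (nvidia.com/gpu here), Node Feature Discovery / GPU feature discovery labels the nodes, and the Job simply requests the resource. The namespace, image name, and node label are illustrative assumptions.

      {code:go}
package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster configuration; outside a cluster, build the config from a kubeconfig instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "accelerator-training-job", // hypothetical name
			Namespace: "demo",                     // hypothetical namespace
		},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					// Target nodes labeled by Node Feature Discovery / the vendor operator
					// (label key is an assumption; it varies by operator configuration).
					NodeSelector: map[string]string{
						"nvidia.com/gpu.present": "true",
					},
					Containers: []corev1.Container{
						{
							Name:    "trainer",
							Image:   "quay.io/example/trainer:latest", // hypothetical image
							Command: []string{"python", "train.py"},
							Resources: corev1.ResourceRequirements{
								// Extended resource advertised by the accelerator device plugin.
								Limits: corev1.ResourceList{
									"nvidia.com/gpu": resource.MustParse("1"),
								},
							},
						},
					},
				},
			},
		},
	}

	created, err := clientset.BatchV1().Jobs("demo").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created job", created.Name)
}
      {code}

      Capabilities such as Kueue, InstaSlice, and Dynamic Resource Allocation would refine how such a request is queued, sliced, and allocated, but the consumption model from the workload’s point of view stays the same.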

       

      Success Criteria

      What are the success criteria for this strategic outcome? Avoid listing Features or Initiatives and instead describe "what must be true" for the outcome to be considered delivered.

       

      Expected Results (what, how, when)

      What incremental impact do you expect to create toward the company's Strategic Goals by delivering this outcome? (Possible examples: unblocking sales, shifts in product metrics, etc.; provide links to the metrics that will be used post-completion for review and pivot decisions.) For each expected result, list what you will measure and when you will measure it (e.g., provide links to existing information or metrics that will be used post-completion for review, and specify when you will review the measurement, such as 60 days after the work is complete).

       

       

      Post Completion Review – Actual Results

      After completing the work (as determined by the "when" in Expected Results above), list the actual results observed / measured during Post Completion review(s).

       
