Type: Feature
Resolution: Unresolved
Priority: Critical
Product / Portfolio Work
Program Call
Feature
The OCI volume source feature allows Kubernetes workloads to mount OCI images and artifacts directly as read-only volumes in a pod, providing a flexible way to deliver large content to applications. By enabling the OCI volume source in OpenShift, AI workloads can pull assets such as model weights and datasets through standard OCI distribution, which is critical for data-intensive machine learning (ML) and deep learning (DL) workloads. This integration allows AI applications in OpenShift to consume such content without baking it into container images, accelerating access to the data essential for AI model training and inference.
AI workloads often need to handle large model weights, datasets, checkpoints, and logs. For OpenShift users, the OCI volume source integration simplifies managing this content: it is versioned and distributed through the same registry infrastructure already used for container images, and it can be updated independently of the workloads that consume it. This is particularly beneficial for model serving in production, where pulling weights as an artifact avoids rebuilding and redistributing large server images.
Use Case
As a data scientist, MLOps engineer, or AI developer, I want to mount large language model (LLM) weights or machine learning model weights in a pod alongside a model server, so that I can serve them efficiently without including them in the model-server container image. I want to package these weights as an OCI artifact to take advantage of OCI distribution and ensure efficient model deployment. This separates the model specifications/content from the executables that process them; a minimal pod sketch follows below.
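For illustration, here is a minimal sketch of the use case, assuming the upstream Kubernetes image volume source API (KEP-4639, alpha and gated behind the ImageVolume feature gate); the image references and mount path are hypothetical placeholders, not values defined by this feature:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
  - name: server
    image: quay.io/example/model-server:latest   # hypothetical serving image
    volumeMounts:
    - name: model-weights
      mountPath: /models                         # weights appear here, read-only
      readOnly: true
  volumes:
  - name: model-weights
    image:                                       # OCI image/artifact volume source (KEP-4639)
      reference: quay.io/example/llm-weights:v1  # hypothetical artifact holding the weights
      pullPolicy: IfNotPresent
```

Because the weights live in their own artifact, a new model version is just a new tag pushed to the registry; the model-server image itself does not change.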
- blocks: OCPSTRAT-1758 Artifacts support in CRI-O for AI images - Dev Preview (In Progress)