
3.4. Model serving and deployment (RHODS-165)

      Data Science users need to be able to easily publish models created in Jupyter notebooks so they can be accessed by other users and ultimately deployed for use in applications. They do not want to have to go through a series of traditionally IT/DevOps functions to deploy their models. They are looking for a system that simplifies the publishing and deployment processes so they can perform these tasks within their data science workspace. Ideally, they want to be self-sufficient in these tasks so they do not require assistance from IT/DevOps.

      Requirements:

      1. P0: The system must provide a GitHub template with sample files needed to support the model serving workflow.
      2. P1: TBD: The system must provide pipeline templates for model training and serving processes.
      3. P1: TBD: The system must support the ability to create pipelines for model serving processes. 
      4. P1: The system must support the ability to clone a GitHub repo from within the JupyterLab UI.
      5. P1: The system must support the ability to edit and run Python scripts from within the JupyterLab interface. 
      6. P1: TBD: The system must support the ability to start a training pipeline from the JupyterLab UI.
      7. P1: TBD: The system must support the ability to initiate a serving pipeline based on a code commit in GitHub.
      8. P0: The system must provide detailed documentation to guide users through the model serving workflow.
      9. P2: The system must support the ability to enable a model in a notebook to be served without requiring users to manually write extraneous code (i.e., code not core to the data science functions) or leave the RHODS workspace.
      10. P2: The system must support the ability to deploy a model as a service without requiring users to leave the RHODS workspace.
      11. P2: The system must support the ability to serve a model that is stored in pickle format. The pickled model could be stored in a persistent volume (PV), S3, or a GitHub repository (see the serving sketch after this list). Note: access to S3 or GitHub assumes the notebook server is connected to these locations.
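
      A minimal sketch of what requirement 11 could look like at runtime, assuming the pickled model sits in an S3 bucket and that boto3 and Flask are available in the serving image. The bucket name, object key, and route are placeholders rather than agreed RHODS interfaces, and the model is assumed to expose a scikit-learn-style predict() that returns numeric values.

      {code:python}
      # Hypothetical serving stub for a pickled model pulled from S3 (requirement 11).
      # Bucket, key, and route names are placeholders, not RHODS conventions.
      import pickle

      import boto3
      from flask import Flask, jsonify, request

      S3_BUCKET = "rhods-models"    # hypothetical bucket
      S3_KEY = "churn/model.pkl"    # hypothetical object key

      # Fetch and unpickle the model once at startup.
      s3 = boto3.client("s3")
      model_bytes = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)["Body"].read()
      model = pickle.loads(model_bytes)

      app = Flask(__name__)

      @app.route("/predict", methods=["POST"])
      def predict():
          # Expects a JSON payload such as {"instances": [[1.0, 2.0, 3.0]]}.
          instances = request.get_json(force=True)["instances"]
          predictions = model.predict(instances)
          return jsonify({"predictions": [float(p) for p in predictions]})

      if __name__ == "__main__":
          app.run(host="0.0.0.0", port=8080)
      {code}

      Whatever serving technology is chosen, the user-visible surface should be no larger than this; the image build, route creation, and credential wiring are the parts the workflow needs to automate.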

      Model serving flow: https://docs.google.com/presentation/d/1yacnsSfqLaL9oW7QjJxn7fzc-FJ8gHibujDB1uuXeQA/edit#slide=id.gbe45568141_0_148

       

      Test cases:

      • Deploy a model on a node with GPUs and verify that the GPUs are utilized to speed up inference (a rough check is sketched below).
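
      A rough way this check could be automated, assuming a PyTorch-based serving image; the stand-in model, batch size, and iteration count are placeholders, and the comparison is only a sanity check, not a benchmark.

      {code:python}
      # Sanity check for the GPU test case: a GPU is visible and inference on it
      # beats CPU for a reasonably sized workload. Sizes are placeholders.
      import time

      import torch

      assert torch.cuda.is_available(), "No GPU visible to the serving pod"

      model = torch.nn.Linear(4096, 4096)   # stand-in for the deployed model
      batch = torch.randn(256, 4096)

      def timed_inference(device):
          m = model.to(device)
          x = batch.to(device)
          with torch.no_grad():
              m(x)                           # warm-up pass
              if device == "cuda":
                  torch.cuda.synchronize()
              start = time.perf_counter()
              for _ in range(50):
                  m(x)
              if device == "cuda":
                  torch.cuda.synchronize()
              return time.perf_counter() - start

      cpu_s = timed_inference("cpu")
      gpu_s = timed_inference("cuda")
      print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")
      assert gpu_s < cpu_s, "Expected GPU inference to be faster than CPU"
      {code}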

       

      Considerations/questions:

      • Need to make the workflow for deploying a model as a service as easy as possible for Data Science users. This will be a key value proposition for RHODS.
      • Need to determine whether to include additional components (e.g., Seldon Core) to enhance serving capabilities. The underlying technology decisions should be part of this epic (see the wrapper sketch below).
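
      If Seldon Core (or something comparable) were adopted, the piece a data scientist would actually touch could be as small as the wrapper below, following Seldon's standard Python model-wrapper convention; the class name, file layout, and model path are placeholders, not a committed design.

      {code:python}
      # Hypothetical Seldon-style wrapper: Seldon's Python server imports this class
      # and exposes predict() over REST/gRPC. Names and paths are placeholders.
      import pickle

      class PickledModel:
          def __init__(self):
              # Model file baked into the image or mounted from a PV at a known path.
              with open("model.pkl", "rb") as f:
                  self._model = pickle.load(f)

          def predict(self, X, features_names=None):
              # Called by Seldon with the parsed request payload.
              return self._model.predict(X)
      {code}

      The container would then be started with something like "seldon-core-microservice PickledModel". Whether RHODS hides that step behind its UI, or picks a different serving layer entirely, is part of the technology decision this epic needs to make.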
