- Epic
- Resolution: Done
- Normal
- None
- Notebook server operations
- False
- False
- No
- To Do
- 0% To Do, 0% In Progress, 100% Done
- Undefined
- No
Data Science users can perform the primary functions of data preparation and model development within Jupyter notebooks. This epic covers requirements for standard notebook capabilities and operations.
Requirements:
- P0: The system must support the ability to launch JupyterLab for notebook creation and access to existing notebooks and other files within a notebook server environment.
- P0: New notebook servers must include appropriate packages and libraries based on the selected notebook image. Note: the images and packages are defined in the 'Support notebook images' epic.
- P0: The system must support the ability to import a new notebook file from a local device.
- P2: The system must support the ability to import a new notebook file from a specified URL.
- P0: The system must support the ability to build models using tools based on the notebook image associated with the notebook server. For example, if the server is using a TensorFlow GPU image, the notebook must be able to build models using TensorFlow and utilize GPUs for compute-intensive processes.
- P0: Notebooks must be able to utilize environment variables defined as part of the notebook server configuration. This includes access to data (e.g., in S3) and the use of services (e.g., Managed Kafka or ISV services); a sketch of S3 access via environment variables follows this list.
- P1: The system must support the ability for multiple users with access to a notebook server to access the same data in S3. Note: this assumes the notebook server environment is connected to an S3 account, and covers different notebook servers accessing the same data.
- P0: The system must support the ability to bring an existing model into a new notebook.
- P1: The system must provide detailed, user-facing error messages with information on how to resolve the issue (e.g., when memory resources are insufficient, what should the Data Science user do?).
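To illustrate the environment-variable requirement, here is a minimal sketch of S3 access from a notebook, assuming the server injects the standard AWS SDK variable names (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_S3_ENDPOINT) and that boto3 is available in the notebook image; the bucket name is a hypothetical placeholder.

```python
# Minimal sketch: reach S3 with credentials injected as environment
# variables by the notebook server configuration.
import os

import boto3

# The AWS SDK picks these variables up automatically; they are read
# explicitly here only to make the dependency on the server-side
# configuration visible.
s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    endpoint_url=os.environ.get("AWS_S3_ENDPOINT"),  # optional, for non-AWS S3
)

# Any notebook server configured with the same credentials sees the
# same objects, which is what the multi-user requirement relies on.
# "example-data-bucket" is a hypothetical placeholder.
response = s3.list_objects_v2(Bucket="example-data-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"])
```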
Test cases:
- Verify that libraries from automatically installed packages can be imported and used.
- Ability to query and visualize data in S3 (see the first sketch after this list).
- Ability to create new datasets in S3 (e.g., add new columns, filter columns, remove rows).
- Split data into training and validation sets.
- Build models using TensorFlow and PyTorch.
- Measure PyTorch and TensorFlow performance with data in S3.
- Train a model in a notebook server with GPUs and verify the GPUs are utilized during training (see the second sketch after this list).
- Test/validate model.
- Multiple users with access to a notebook server can access the same data in S3; a single set of access credentials in environment variables should enable this.
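The data preparation test cases above could be exercised along these lines, assuming pandas, s3fs, and scikit-learn are present in the notebook image; the S3 paths and column names are hypothetical placeholders.

```python
# Minimal sketch: query data in S3, derive a new dataset, write it back,
# and split it into training and validation sets.
import pandas as pd
from sklearn.model_selection import train_test_split

# pandas reads s3:// URLs through s3fs, using the same AWS_* environment
# variables as in the previous sketch.
df = pd.read_csv("s3://example-data-bucket/raw/measurements.csv")

# Create a derived dataset: add a column, filter columns, remove rows.
df["ratio"] = df["feature_a"] / df["feature_b"]
subset = df[["feature_a", "feature_b", "ratio", "label"]].dropna()
subset.to_csv(
    "s3://example-data-bucket/prepared/measurements_clean.csv", index=False
)

# Hold out 20% of the rows for validation.
train_df, valid_df = train_test_split(subset, test_size=0.2, random_state=42)
print(len(train_df), len(valid_df))
```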
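For the GPU test cases, a minimal sketch with synthetic data: it lists the GPUs TensorFlow can see and trains a trivial model so that utilization can be checked externally (e.g., with nvidia-smi). Shapes and data are placeholders.

```python
# Minimal sketch: confirm GPUs are visible to TensorFlow and run a short
# training loop so GPU utilization can be observed during training.
import numpy as np
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# Synthetic binary-classification data.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With a TensorFlow GPU image, Keras places this training on the GPU
# automatically when one is available.
model.fit(x, y, epochs=3, batch_size=128)
```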
- is cloned by RHODS-1346 FUTURE GA: Notebook server operations (New)