Red Hat OpenShift Data Science / RHODS-3591

As a QE, I want to automate a TC to execute a sample Jupyter notebook for Pachyderm


    • Type: Story
    • Resolution: Won't Do
    • Priority: Normal
    • pm-ack
      Acceptance criteria:
      1. the test case runs the entire sample Jupyter notebook
      2. the teardown deletes the files in the S3 bucket (without deleting the bucket itself)
    • RHODS-7452 - Test Automation Backlog
    • RHOSi 1.11

      In RHODS-3332 we ran a Jupyter notebook to create a Pachyderm pipeline. Now we want to run the entire Jupyter notebook to check that everything works correctly.

      This will add coverage for https://polarion.engineering.redhat.com/polarion/#/project/OpenDataHub/workitem?id=ODS-1026
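      A minimal sketch of executing the sample notebook end to end, assuming papermill is an acceptable runner; the notebook paths are placeholders (the real automation may instead drive JupyterLab through the UI):

      {code:python}
      # Hedged sketch: run the whole sample notebook headlessly with papermill.
      # Assumption: papermill is installed; the notebook paths are placeholders.
      import papermill as pm

      pm.execute_notebook(
          "pachyderm-sample.ipynb",         # hypothetical path to the sample notebook
          "pachyderm-sample-output.ipynb",  # executed copy kept for debugging failures
      )
      {code}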

      Attention

      It would be nice to have a teardown keyword that deletes the files created in AWS S3 by the Jupyter notebook. Do not delete the S3 bucket itself, only the files. Refer to the AWS documentation or try the boto3 Python library (see the sketch below).
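      A minimal teardown sketch using boto3, assuming AWS credentials are available in the environment; the bucket name and key prefix are placeholders:

      {code:python}
      # Delete only the objects created by the notebook; never delete the bucket itself.
      import boto3

      def delete_notebook_files(bucket_name: str, prefix: str = "") -> None:
          """Remove the objects under `prefix` in `bucket_name`, keeping the bucket."""
          s3 = boto3.resource("s3")
          bucket = s3.Bucket(bucket_name)
          # objects.filter(...).delete() removes the matching objects in batches.
          bucket.objects.filter(Prefix=prefix).delete()

      # Hypothetical usage from the Robot Framework teardown keyword:
      # delete_notebook_files("my-pachyderm-test-bucket", prefix="pachyderm/")
      {code}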

       

      Note: the sample Jupyter notebook we used for RHODS-3332 may fail if some cells run before the pipeline pod is ready and running, so you may need to add a sleep, or better, an explicit wait for the pod, before those cells (see the sketch below).
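      Instead of a fixed sleep, the test could poll until the pipeline pod reports Ready. A sketch assuming the kubernetes Python client is available; the namespace and label selector are placeholders:

      {code:python}
      # Wait until a pod matching the selector reports the Ready condition, or time out.
      import time
      from kubernetes import client, config

      def wait_for_pipeline_pod(namespace: str, label_selector: str, timeout: int = 300) -> None:
          config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
          v1 = client.CoreV1Api()
          deadline = time.time() + timeout
          while time.time() < deadline:
              pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
              for pod in pods:
                  conditions = pod.status.conditions or []
                  if any(c.type == "Ready" and c.status == "True" for c in conditions):
                      return
              time.sleep(5)
          raise TimeoutError(f"No Ready pod for '{label_selector}' in '{namespace}' after {timeout}s")

      # Hypothetical usage before running the pipeline-dependent cells:
      # wait_for_pipeline_pod("pachyderm", "suite=pipeline")
      {code}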

              rhn-support-nchopra Neha Chopra
              rhn-support-bdattoma Berto D'Attoma