We know that customers want to be able to “track things” in/about their experiments/pipelines, but we don't have a good feel for what those things are. The goal of this task would be to clearly document a user scenario around this that we can then start to build towards.
It may be something like: “I want to register metadata about the data source that I used to train the model in this experiment. I want to register key performance/accuracy metrics about the model that was trained in this experiment.”
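To make the scenario concrete, here is a minimal sketch of the kind of calls such a user story implies. This is purely illustrative: `ExperimentTracker`, its methods, and the example values are all hypothetical placeholders, not a real or proposed API.

```python
# Hypothetical sketch only -- illustrates the kind of "register metadata /
# register metrics" calls the user story above implies. The class and its
# methods are placeholders, not an existing or committed API.

class ExperimentTracker:
    """Placeholder for whatever tracking interface we build towards."""

    def __init__(self, experiment_name: str):
        self.experiment_name = experiment_name
        self.metadata: dict[str, str] = {}
        self.metrics: dict[str, float] = {}

    def log_metadata(self, key: str, value: str) -> None:
        # e.g. data source, pipeline version, training config reference
        self.metadata[key] = value

    def log_metric(self, key: str, value: float) -> None:
        # e.g. accuracy, F1, latency of the trained model
        self.metrics[key] = value


# Example usage (all names/values are made up):
tracker = ExperimentTracker("fraud-detection-v2")
tracker.log_metadata("data_source", "s3://datasets/transactions-2023-q1")
tracker.log_metric("accuracy", 0.94)
tracker.log_metric("f1_score", 0.91)
```

Part of this task is deciding which of these "things" (metadata vs. metrics vs. something else) customers actually care about tracking.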
We should get feedback from IBM, the RHODS field team, and BU teams on whatever we come up with.
Acceptance criteria:
A well-formulated user story with an ack from jdemoss@redhat.com.