Kafka integration (Epic)
- Priority: Critical
- Resolution: Done
- Status: To Do
-
Data Science users need to be able to use an integration with the Managed Kafka service to handle streaming-data use cases. For example, a user may want to build and/or test a model on streaming data, or run inference against streaming data using a published model.
Requirements:
- P0: The system must support the ability to use Kafka streaming data within notebooks to perform functions such as building and testing a model. Note: this assumes the notebook server environment is connected to the Kafka managed service.
- P1: The system must support the ability to write data to topics on the Kafka managed service.
- P0: The system must support the ability to use streaming data from the Kafka managed service in a published model. Note: this needs to be supported both with (when service is available) and without Seldon for model publishing.
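A minimal sketch of the P0 inference requirement, showing the consume → score → publish control flow. The record layout, the toy model, and the in-memory stand-ins for a Kafka consumer and producer are all assumptions for illustration; a real notebook would replace them with an actual Kafka client connected to the managed service.

```python
import json

def run_inference(messages, model, out):
    """Consume serialized records, score each with the model, and
    publish results. `messages` stands in for a Kafka consumer and
    `out.append` for a producer send call."""
    for raw in messages:
        record = json.loads(raw)            # deserialize the streamed record
        score = model(record["features"])   # run the published model
        out.append(json.dumps({"id": record["id"], "score": score}))

# Toy model: sum of features (placeholder for a real published model).
model = lambda xs: sum(xs)

# Simulated topic contents.
stream = [json.dumps({"id": i, "features": [i, i + 1]}) for i in range(3)]
results = []
run_inference(stream, model, results)
```

The same loop body applies whether the model runs inside a notebook or behind a serving layer such as Seldon; only the consumer/producer wiring changes.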
Considerations/questions:
- In addition to the Kafka endpoint, the user will need the Kafka topic name and Kafka authentication information. The Kafka endpoint should be automatically populated in an environment variable as part of the notebook server creation flow. The endpoint should also be available as a reference from the RHODS dashboard.
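A sketch of how notebook code could pick up the injected endpoint, assuming a hypothetical environment variable name (`KAFKA_BOOTSTRAP_SERVERS`); the actual variable name set by the notebook server creation flow, and the authentication settings, would need to be confirmed.

```python
import os

def kafka_config(env=os.environ):
    """Build a client configuration dict from the environment.
    KAFKA_BOOTSTRAP_SERVERS is a hypothetical variable name; topic
    name and credentials are supplied separately by the user."""
    endpoint = env.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")
    return {
        "bootstrap_servers": endpoint,
        # Placeholder auth settings; real values depend on the
        # managed service's authentication scheme.
        "security_protocol": "SASL_SSL",
        "sasl_mechanism": "PLAIN",
    }

cfg = kafka_config({"KAFKA_BOOTSTRAP_SERVERS": "broker.example:9092"})
```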
Issue links:
- Clones: RHODS-164 Kafka integration (Closed)