
    Kafka integration

    Status: To Do

      Data Science users need to be able to use an integration with the Managed Kafka service to handle use cases involving streaming data. For example, I may want to build and/or test a model on streaming data. Additionally, I may want to run inference against streaming data using a published model.

      Requirements:

      1. P0: The system must support the ability to use Kafka streaming data within notebooks to perform functions such as building and testing a model (a minimal sketch follows this list). Note: this assumes the notebook server environment is connected to the Kafka managed service.
      2. P1: The system must support the ability to write data to topics on the Kafka managed service.
      3. P0: The system must support the ability to use streaming data from the Kafka managed service in a published model (a combined sketch covering requirements 2 and 3 also follows this list). Note: this needs to be supported both with Seldon for model publishing (when that service is available) and without it.
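
      The following is a minimal sketch of requirement 1, assuming the kafka-python client is installed in the notebook image and that the Kafka endpoint and credentials are exposed through environment variables; the variable names and the topic name used here are hypothetical and not part of the requirement itself.

        import os
        import json
        from kafka import KafkaConsumer

        # Hypothetical variable name; the notebook server creation flow would populate this.
        bootstrap = os.environ["KAFKA_BOOTSTRAP_SERVER"]

        consumer = KafkaConsumer(
            "training-events",                      # example topic name supplied by the user
            bootstrap_servers=bootstrap,
            security_protocol="SASL_SSL",
            sasl_mechanism="PLAIN",
            sasl_plain_username=os.environ["KAFKA_USERNAME"],
            sasl_plain_password=os.environ["KAFKA_PASSWORD"],
            auto_offset_reset="earliest",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
            consumer_timeout_ms=10_000,             # stop iterating when the stream goes quiet
        )

        # Accumulate a batch of streamed records to build or test a model against.
        batch = [record.value for record in consumer]
        consumer.close()
        print(f"collected {len(batch)} records for model training")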
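
      Requirements 2 and 3 together amount to a consume-score-produce loop: read streaming records, run them through the published model, and write results back to a topic. Below is a minimal sketch under the same assumptions (kafka-python client, hypothetical environment variable and topic names), with a placeholder predict() standing in for the published model and no Seldon in the path.

        import os
        import json
        from kafka import KafkaConsumer, KafkaProducer

        bootstrap = os.environ["KAFKA_BOOTSTRAP_SERVER"]       # hypothetical variable name
        auth = dict(
            security_protocol="SASL_SSL",
            sasl_mechanism="PLAIN",
            sasl_plain_username=os.environ["KAFKA_USERNAME"],   # hypothetical variable names
            sasl_plain_password=os.environ["KAFKA_PASSWORD"],
        )

        consumer = KafkaConsumer(
            "inference-requests",                                # example input topic
            bootstrap_servers=bootstrap,
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
            **auth,
        )
        producer = KafkaProducer(
            bootstrap_servers=bootstrap,
            value_serializer=lambda v: json.dumps(v).encode("utf-8"),
            **auth,
        )

        def predict(features):
            # Placeholder for the published model; in practice this would call the
            # loaded model object or the model-serving endpoint.
            return {"score": sum(features) / max(len(features), 1)}

        # Score each streamed record and write the result back to a topic (requirement 2).
        for record in consumer:
            result = predict(record.value["features"])
            producer.send("inference-results", {"id": record.value.get("id"), **result})
            producer.flush()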

       

      Considerations/questions:

      - In addition to the Kafka endpoint, the user will need the Kafka topic name and Kafka authentication credentials. The Kafka endpoint should be automatically populated in an environment variable as part of the notebook server creation flow, and should also be available as a reference from the RHODS dashboard. (A small connectivity-check sketch follows this bullet.)
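
      As an illustration of that hand-off, a notebook could verify the injected endpoint and credentials before running any of the flows above; the environment variable names are again hypothetical, pending the actual notebook server creation flow.

        import os
        from kafka import KafkaConsumer

        # Hypothetical variable name for the auto-populated endpoint.
        bootstrap = os.environ.get("KAFKA_BOOTSTRAP_SERVER")
        if bootstrap is None:
            raise RuntimeError("Kafka endpoint was not injected into the notebook environment")

        consumer = KafkaConsumer(
            bootstrap_servers=bootstrap,
            security_protocol="SASL_SSL",
            sasl_mechanism="PLAIN",
            sasl_plain_username=os.environ["KAFKA_USERNAME"],
            sasl_plain_password=os.environ["KAFKA_PASSWORD"],
        )
        print("reachable topics:", sorted(consumer.topics()))
        consumer.close()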

            Assignee: Unassigned
            Reporter: Jeff DeMoss (jdemoss@redhat.com)