• Type: Sub-task
    • Resolution: Done
    • Priority: Normal
    • Consoledot CY24Q4
    • CRCPLAN-232 - AuthZ | PRBAC v2 Service Provider Migration Initiation (Internal)
    • A&M Tech Debt Q10, Access & Management Sprint 95, Access & Management Sprint 96, Access & Management Sprint 97, Access & Management Sprint 98, Access & Management Sprint 99

      In RHCLOUD-34856 a sink connector was created and tested with a docker-compose config. This task is to create an OpenShift deployment config and test it on the most convenient environment, i.e. ephemeral or stage.

      Review the work done for the Debezium connector in RHCLOUD-34507: "Debezium deployment for ephemeral/stage" before proceeding, because there will be a lot of overlap.

      The sink connector deployment config will differ in that it will refer to a custom plugin, i.e. the build of https://github.com/project-kessel/kafka-relations-sink deployed to Maven Central (or internally within Red Hat infra). RHCLOUD-35567 is a prerequisite for this Jira, to ensure a deployed artifact is available at a URL.

      Assuming we will use Strimzi, a good place to start is this KafkaConnect template: https://strimzi.io/docs/operators/in-development/deploying#creating-new-image-using-kafka-connect-build-str
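
      For reference, a minimal sketch of that approach, assuming the Strimzi operator is available in the target namespace. Every name, the bootstrap address, and the plugin artifact URL below are placeholders rather than agreed values; the artifact URL in particular depends on the outcome of RHCLOUD-35567.

        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaConnect
        metadata:
          name: relations-sink-connect                  # placeholder name
          annotations:
            strimzi.io/use-connector-resources: "true"  # manage connectors via KafkaConnector resources
        spec:
          replicas: 1
          bootstrapServers: kafka:9092                  # replace with the environment's bootstrap server
          build:
            output:
              type: imagestream                         # on OpenShift; "docker" plus a registry also works
              image: relations-sink-connect:latest
            plugins:
              - name: kafka-relations-sink
                artifacts:
                  - type: jar
                    url: <artifact URL from RHCLOUD-35567>   # not yet known
          config:
            group.id: relations-sink-connect
            config.storage.topic: relations-sink-connect-configs
            offset.storage.topic: relations-sink-connect-offsets
            status.storage.topic: relations-sink-connect-status
            config.storage.replication.factor: 1        # assumes a single-broker test cluster; raise for stage
            offset.storage.replication.factor: 1
            status.storage.replication.factor: 1
            # Converter settings are a guess; match whatever the docker-compose config from RHCLOUD-34856 used.
            key.converter: org.apache.kafka.connect.storage.StringConverter
            value.converter: org.apache.kafka.connect.storage.StringConverter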

      Considerations:

      Definition of done:

      1. The Kafka sink connector is deployable and runnable on ephemeral or stage, as appropriate for ease of testing.
      2. It is configured to connect to the relations-api (see the connector resource sketch after this list).
      3. The connector is tested by sending a message on the correct topic with a command like the following (with the bootstrap URL corrected):
        echo '{"schema":{"type":"string","optional":true,"name":"io.debezium.data.Json","version":1},"payload":"{\"relations_to_add\": [{\"subject\": {\"subject\": {\"id\": \"my_workspace_2\", \"type\": {\"name\": \"workspace\", \"namespace\": \"rbac\"}}}, \"relation\": \"workspace\", \"resource\": {\"id\": \"my_integration\", \"type\": {\"name\": \"integration\", \"namespace\": \"notifications\"}}}], \"relations_to_delete\": []}"}' | bin/kafka-console-producer.sh --bootstrap-server kafka:9092 --topic outbox.event.RelationReplicationEvent 

        and checking SpiceDB to ensure the relation is created, e.g.:

        $ zed relationship read notifications/integration
        notifications/integration:my_integration t_workspace rbac/workspace:my_workspace_2 
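
      For item 2, assuming connectors are managed declaratively through the Strimzi operator, the connector instance could be expressed as a KafkaConnector resource along these lines. The connector class and any relations-api settings are deliberately left as placeholders, since the actual configuration keys are defined by the kafka-relations-sink project.

        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaConnector
        metadata:
          name: relations-sink                          # placeholder name
          labels:
            strimzi.io/cluster: relations-sink-connect  # must match the KafkaConnect resource name
        spec:
          class: <fully qualified sink connector class from kafka-relations-sink>
          tasksMax: 1
          config:
            topics: outbox.event.RelationReplicationEvent
            # The relations-api endpoint and credentials go here; the exact keys are
            # defined by https://github.com/project-kessel/kafka-relations-sink.

      Once both resources are reconciled, the test message above should flow through the connector to relations-api, and the zed check should show the new relationship.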
