AMQ Streams Flink / ENTMQSTFL-32

Investigation: Investigate and decide how users provide SQL statements and Kafka credentials to the SqlJob.jar

    • Type: Task
    • Resolution: Done
      When a user creates a Flink job, they need some way to submit their SQL statements to the SqlJob.jar. These statements will likely include CREATE TABLE commands that use the Kafka connector to consume records from, and produce records to, one or more Kafka topics. However, the DDL statements contain security credentials for connecting to the Kafka cluster.

      There are multiple ways to pass the statements:

      • Indirectly, via an environment variable (e.g. --fromEnvVar=MY_DDL_VAR). This is intended (and will be documented as such) for DDL provided via a Secret configured in the pod template in the FlinkDeployment CR.
      • Indirectly, via a file in the container filesystem (e.g. --fromFile=/path/to/my/file). This is intended (and will be documented as such) for DDL provided via a Secret mount configured in the pod template in the FlinkDeployment CR.
      • Indirectly, via a named Secret resource read from the Kube API server (e.g. --fromSecret=my-secret/my-key). This is intended (and will be documented as such) for DDL provided via a Secret. This option is better suited to "session mode" using FlinkSessionJob, where the Secrets that need to be mounted are not known at the time the FlinkDeployment is created.
      • Directly, as an argument (e.g. --stmts="SELECT …"). This is intended (and will be documented as such) for DQL and DML statements.
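      To illustrate why the DDL carries credentials, a hypothetical Flink SQL table backed by the Kafka connector might look like the following. The table, topic, bootstrap address, and credential values are made up for illustration; the option keys follow the Flink Kafka connector's `properties.*` passthrough convention.

```sql
-- Hypothetical DDL a user might submit to the SqlJob.
-- Note the SASL credentials embedded directly in the statement.
CREATE TABLE orders (
  order_id STRING,
  amount   DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'my-cluster-kafka-bootstrap:9092',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.sasl.mechanism' = 'SCRAM-SHA-512',
  'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.scram.ScramLoginModule required username="my-user" password="changeit";',
  'format' = 'json'
);
```

      Because the JAAS config line contains a plaintext password, passing such DDL directly on the command line would expose the credential in the pod spec and process arguments, which motivates the indirect options above.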

      We need to determine which option(s) we will offer to users.

      If we assume that DQL and DML statements will be passed as arguments (e.g. --stmts="SELECT …"), the same argument could also carry DDL statements that use templating for the credentials.

      There are two options for how the credentials can be provided to the DDL:

      • The SqlJob can be Kube-aware, acting as a client of the API server. A Secret resource can then be referenced in the statements with a template such as {{secret:namespace/name/key}}, which the SqlJob interpolates by retrieving the specified Secret.
      • The user mounts the Secrets via the pod template, in which case the interpolatable placeholder is more like {{file:/path/to/secret/mount}}.

      The first option is better suited for "session mode" using FlinkSessionJob, where it’s not known at the time the FlinkDeployment is created which secrets need to be mounted. 
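      A minimal sketch of how the SqlJob could resolve the second, file-based placeholder form, assuming a {{file:...}} syntax as described above. The class and method names are illustrative, not the actual SqlJob implementation; the Kube-aware {{secret:...}} form would resolve values via the Kubernetes API instead and is omitted to keep the sketch self-contained.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderInterpolator {
    // Matches placeholders of the form {{file:/path/to/secret/mount}}.
    private static final Pattern FILE_PLACEHOLDER =
            Pattern.compile("\\{\\{file:([^}]+)\\}\\}");

    /**
     * Replaces each {{file:...}} placeholder in the SQL statements with the
     * contents of the referenced file (typically a mounted Secret key).
     */
    public static String interpolate(String statements) throws IOException {
        Matcher m = FILE_PLACEHOLDER.matcher(statements);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String path = m.group(1);
            // Trim the trailing newline that Secret files often carry.
            String value = Files.readString(Paths.get(path)).trim();
            // quoteReplacement avoids treating '$' or '\' in the
            // credential as regex replacement syntax.
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

      With this approach the credential never appears in the FlinkDeployment CR or the process arguments; only the mount path does.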

      To complete this task, we need to review the different approaches and decide which options we will offer to users.

              rh-ee-gselenge Gantigmaa Selenge