DBZ-5282: Debezium is not working with Apicurio and custom truststores


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 2.2.0.Alpha1
    • Affects Versions: 1.9.0.Final, 1.9.2.Final, 1.9.3.Final, 1.9.4.Final
    • Component: postgresql-connector

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      1.9

      What is the connector configuration?

      {
        "name": "inventory-connector",
        "config": {
          "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
          "tasks.max": "1",
          "database.hostname": "postgres",
          "database.port": "5432",
          "database.user": "postgres",
          "database.password": "postgres",
          "database.dbname": "postgres",
          "database.server.name": "dbserver1",
          "schema.include.list": "inventory",
          "key.converter.schemas.enable": "false",
          "value.converter": "io.apicurio.registry.utils.converter.ExtJsonConverter",
          "value.converter.apicurio.registry.url": "http://apicurio:8080/apis/registry/v2",
          "value.converter.apicurio.registry.auto-register": "true",
          "value.converter.apicurio.registry.artifact.group-id": "dummy",
          "value.converter.apicurio.registry.request.ssl.truststore.location": "TroubleMaker"
        }
      }

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      on-premises postgres

      What behaviour do you expect?

      Use Apicurio as schema registry. This worked in version 1.8.

      What behaviour do you see?

      We get an error every time the connector connects to Apicurio, caused by an invalid HTTP header.

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      yes

      Do you have the connector logs, ideally from start till finish?

      org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
      at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
      at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:329)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:355)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
      at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
      at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
      at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
      at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
      at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: java.lang.IllegalArgumentException: invalid header name: ""
      at java.net.http/jdk.internal.net.http.common.Utils.newIAE(Utils.java:280)
      at java.net.http/jdk.internal.net.http.HttpRequestBuilderImpl.checkNameAndValue(HttpRequestBuilderImpl.java:107)
      at java.net.http/jdk.internal.net.http.HttpRequestBuilderImpl.header(HttpRequestBuilderImpl.java:126)
      at java.net.http/jdk.internal.net.http.HttpRequestBuilderImpl.header(HttpRequestBuilderImpl.java:43)
      at java.base/java.util.HashMap.forEach(HashMap.java:1337)
      at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:153)
      at io.apicurio.registry.rest.client.impl.RegistryClientImpl.createArtifact(RegistryClientImpl.java:236)
      at io.apicurio.registry.rest.client.RegistryClient.createArtifact(RegistryClient.java:139)
      at io.apicurio.registry.serde.DefaultSchemaResolver.lambda$handleAutoCreateArtifact$2(DefaultSchemaResolver.java:174)
      at io.apicurio.registry.serde.ERCache.lambda$getValue$0(ERCache.java:132)
      at io.apicurio.registry.serde.ERCache.retry(ERCache.java:171)
      at io.apicurio.registry.serde.ERCache.getValue(ERCache.java:131)
      at io.apicurio.registry.serde.ERCache.getByContent(ERCache.java:116)
      at io.apicurio.registry.serde.DefaultSchemaResolver.handleAutoCreateArtifact(DefaultSchemaResolver.java:172)
      at io.apicurio.registry.serde.DefaultSchemaResolver.resolveSchema(DefaultSchemaResolver.java:82)
      at io.apicurio.registry.utils.converter.ExtJsonConverter.fromConnectData(ExtJsonConverter.java:97)
      at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$3(WorkerSourceTask.java:329)
      at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
      at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
          ... 11 more

      How to reproduce the issue using our tutorial deployment?

      The problem can easily be reproduced with the connector config provided above. The key additions to the tutorial examples are:

      Connector:

      "value.converter": "io.apicurio.registry.utils.converter.ExtJsonConverter",
      "value.converter.apicurio.registry.url": "http://apicurio:8080/apis/registry/v2",
      "value.converter.apicurio.registry.auto-register": "true",
      "value.converter.apicurio.registry.artifact.group-id": "dummy",
      "value.converter.apicurio.registry.request.ssl.truststore.location": "TroubleMaker"

       

      Docker-Compose:

      apicurio:
        image: apicurio/apicurio-registry-mem:2.0.0.Final
        ports:
          - 8080:8080
      connect:
        image: quay.io/debezium/connect:1.9
        ports:
          - 8083:8083
        links:
          - kafka
          - postgres
        environment:
          - BOOTSTRAP_SERVERS=kafka:9092
          - GROUP_ID=1
          - CONFIG_STORAGE_TOPIC=my_connect_configs
          - OFFSET_STORAGE_TOPIC=my_connect_offsets
          - STATUS_STORAGE_TOPIC=my_connect_statuses
          - ENABLE_APICURIO_CONVERTERS=true

       

      The RegistryClientFactory has a method that rewrites provided config parameters (e.g. value.converter.apicurio.registry.request.ssl.truststore.location). For a defined set of keys, the entire key is replaced by the static value "apicurio.rest.request.headers.".
      https://github.com/Apicurio/apicurio-registry/blob/master/client/src/main/java/io/apicurio/registry/rest/client/RegistryClientFactory.java

      The name of the replacement value suggests that the config keys should be prefixed with it rather than replaced by it.
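
      To make the difference concrete, here is a minimal, hypothetical sketch of the replacement described above. It is not the actual RegistryClientFactory code; the class name, method, and map handling are made up for illustration, and only the prefix constant, the truststore key, and the "TroubleMaker" value are taken from this report:

      import java.util.HashMap;
      import java.util.Map;

      public class KeyReplacementSketch {

          // Static value mentioned in the report
          static final String HEADER_PREFIX = "apicurio.rest.request.headers.";
          // One of the keys that gets rewritten
          static final String TRUSTSTORE_KEY = "apicurio.registry.request.ssl.truststore.location";

          static Map<String, Object> rewrite(Map<String, Object> config) {
              Map<String, Object> result = new HashMap<>();
              config.forEach((key, value) -> {
                  if (TRUSTSTORE_KEY.equals(key)) {
                      // Reported behaviour: the whole key is replaced by the prefix,
                      // so the original key name is lost.
                      result.put(HEADER_PREFIX, value);
                      // The constant's name suggests prefixing was intended instead:
                      // result.put(HEADER_PREFIX + key, value);
                  }
                  else {
                      result.put(key, value);
                  }
              });
              return result;
          }

          public static void main(String[] args) {
              Map<String, Object> config = new HashMap<>();
              config.put(TRUSTSTORE_KEY, "TroubleMaker");
              // Prints {apicurio.rest.request.headers.=TroubleMaker}
              System.out.println(rewrite(config));
          }
      }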

      The keys that were replaced instead of prefixed are later mapped again in io.apicurio.rest.client.JdkHttpClient. This time the value "apicurio.rest.request.headers." is replaced by "".
      As a result, the default header fields contain an invalid entry with an empty name.
      https://github.com/Apicurio/apicurio-common-rest-client/blob/main/rest-client-jdk/src/main/java/io/apicurio/rest/client/JdkHttpClient.java
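
      Again only a hypothetical sketch, assuming the client turns every config entry whose key starts with the prefix into an HTTP request header by stripping the prefix. For the entry produced by the replacement above, nothing remains of the key, which reproduces the exception from the log:

      import java.net.URI;
      import java.net.http.HttpRequest;
      import java.util.HashMap;
      import java.util.Map;

      public class HeaderMappingSketch {

          static final String HEADER_PREFIX = "apicurio.rest.request.headers.";

          public static void main(String[] args) {
              // Entry produced by the replacement sketched above
              Map<String, String> config = new HashMap<>();
              config.put(HEADER_PREFIX, "TroubleMaker");

              HttpRequest.Builder builder =
                      HttpRequest.newBuilder(URI.create("http://apicurio:8080/apis/registry/v2"));
              config.forEach((key, value) -> {
                  if (key.startsWith(HEADER_PREFIX)) {
                      String headerName = key.substring(HEADER_PREFIX.length()); // "" here
                      // Throws: java.lang.IllegalArgumentException: invalid header name: ""
                      builder.header(headerName, value);
                  }
              });
          }
      }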

      Nevertheless, I assume it is not intended to send truststore locations, including passwords, as HTTP headers.

              Assignee: Unassigned
              Reporter: Timo Roeseler (t.roeseler@eos-ts.com)