  Debezium / DBZ-2687

Add FAQ paragraph on Debezium container image configuration with ENV vars


Details

    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Minor
    • Component: debezium-server
    • Steps to Reproduce:
      1. Set up SQL database to use as source.
      2. Create a table with at least one NVARCHAR(MAX) column, and enable CDC capture on it.
      3. Set up a connector of class 'io.debezium.connector.sqlserver.SqlServerConnector', pointing it at the DB from step 1, and whitelist the table from step 2 (an example connector config sketch follows these steps).
      4. Insert a record whose serialized representation will be greater than 1MB on disk.

       

      Actual: The connector fails due to the message size being larger than the max.request.size.

      Expected: I would like to be able to set the max.request.size on the producer in order to solve the issue.
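
      For reference, a minimal connector configuration matching these steps might look like the sketch below, in Kafka Connect standalone .properties form. This is not taken from the issue: hostnames, credentials, database and table names are placeholders, and the property names follow the Debezium 1.3-era documentation (e.g. 'table.whitelist').

      name=sqlserver-large-records-connector
      connector.class=io.debezium.connector.sqlserver.SqlServerConnector
      # Placeholder connection details for the database from step 1
      database.hostname=sqlserver.example.com
      database.port=1433
      database.user=debezium
      database.password=secret
      database.dbname=testDB
      database.server.name=server1
      # CDC-enabled table from step 2 (placeholder name)
      table.whitelist=dbo.large_records
      # Database history topic settings required by the SQL Server connector
      database.history.kafka.bootstrap.servers=kafka:9092
      database.history.kafka.topic=dbhistory.server1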


    Description

      I am using the Debezium SQL Server connector (a Kafka Connect source connector) to stream changes from an output table in my database.

      Some records in this output table can be very large; they overflow the default Kafka producer max.request.size of 1 MB (1048576 bytes). When the Debezium connector encounters such a record, I see this error:

      org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
      	at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:265)
      	at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:319)
      	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:247)
      	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
      	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 2202897 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.

      I need to be able to set the max.request.size property on the change data producer. Reading the documentation, it seems that there are pass-through config options available on the database history producer: https://debezium.io/documentation/reference/1.3/connectors/sqlserver.html#sqlserver-connector-properties

      These are specified with the prefix 'database.history.producer.'. However, setting this does not solve the issue. I suspect that is because the prefix only applies to the producer that writes to the database history topic, not to the producer for the actual change data topics.

      Is there a way to set max.request.size on this producer using the current config?
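
      For illustration only, and not part of the original report: the 'database.history.producer.*' pass-through applies just to the history producer, while the producer that writes the change data topics belongs to the Kafka Connect worker, so its limits are normally raised at the worker level (or, with Kafka Connect 2.3+, per connector via 'producer.override.*' when the worker permits client config overrides). A sketch in worker-properties form, with the 10 MB value chosen arbitrarily:

      # connect-distributed.properties (worker level): applies to the producer
      # that writes the change data topics
      producer.max.request.size=10485760

      # Per-connector alternative (Kafka Connect 2.3+): the worker must allow overrides...
      connector.client.config.override.policy=All
      # ...and the connector configuration can then set:
      producer.override.max.request.size=10485760

      The Debezium connect container image documents translating CONNECT_-prefixed environment variables into worker properties (for example, CONNECT_PRODUCER_MAX_REQUEST_SIZE maps to producer.max.request.size), which appears to be the configuration route the issue title refers to. The broker and consumers typically also need matching increases (message.max.bytes, max.partition.fetch.bytes) for such messages to be stored and read.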


            People

              Assignee: Unassigned
              Reporter: Nick Price (ngprice) (Inactive)
              Votes: 0
              Watchers: 3
