Debezium / DBZ-6228

Publish of sync event fails when message becomes very large.


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 2.2.0.Beta1
    • Affects Version: 2.2.0.Alpha1
    • Component: spanner-connector
    • Critical

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      Spanner - v2.2.0.Alpha1

      What is the connector configuration?

      {
        "name": "livestream-connector",
        "config": {
          "connector.class": "io.debezium.connector.spanner.SpannerConnector",
          "tasks.max": "1",
          "gcp.spanner.change.stream": "live-table",
          "gcp.spanner.project.id": "project-stag",
          "gcp.spanner.instance.id": "development-instance",
          "gcp.spanner.database.id": "live-db",
          "gcp.spanner.credentials.path": "/app/services-sa.json",
          "key.converter.schemas.enable": false,
          "value.converter.schemas.enable": false
        }
      }

      What is the captured database version and mode of deployment?

      GCP Managed Spanner DB and Kafka connect deployed on K8s.

      What behaviour do you expect?

      The publish should not fail. This publish failure is non-recoverable, and the task is not able to start.

      What behaviour do you see?

      The publish fails because the message size is too large. It throws an error similar to the one described here:

      https://stackoverflow.com/questions/55181375/org-apache-kafka-common-errors-recordtoolargeexception-the-request-included-a-m

       

      The maximum message size on our Kafka server is 1 MB.

       

      ERROR Task failure, taskUid: cdc-platform-livestream_member_prod_task-3_a2cd360e-24da-4e4d-9375-0f885a1374f2, io.debezium.connector.spanner.exception.SpannerConnectorException: Error during publishing to the Sync Topic

      at io.debezium.connector.spanner.kafka.internal.TaskSyncPublisher.publishSyncEvent(TaskSyncPublisher.java:95)

      at io.debezium.connector.spanner.kafka.internal.TaskSyncPublisher.send(TaskSyncPublisher.java:63)

      at io.debezium.connector.spanner.task.RebalanceHandler.process(RebalanceHandler.java:75)

      at io.debezium.connector.spanner.task.SynchronizationTaskContext.lambda$init$0(SynchronizationTaskContext.java:185)

      at io.debezium.connector.spanner.kafka.internal.RebalancingEventListener$1.lambda$onPartitionsAssigned$0(RebalancingEventListener.java:91)

      at io.debezium.connector.spanner.task.utils.ResettableDelayedAction.lambda$set$0(ResettableDelayedAction.java:36)

      at java.base/java.lang.Thread.run(Thread.java:829)

      Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.

      at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:97)

      at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:65)

      at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)

      at io.debezium.connector.spanner.kafka.internal.TaskSyncPublisher.publishSyncEvent(TaskSyncPublisher.java:80)

      ... 6 more

      Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.

      Do you see the same behaviour using the latest released Debezium version?

      Yes

      Do you have the connector logs, ideally from start till finish?

      Yes.

      How to reproduce the issue using our tutorial deployment?

      This is reproducible when the message size of the sync event is very large. As a workaround, increase the default message size limit:

      max.message.bytes
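As a sketch of that workaround (the byte values below are illustrative, not taken from this report): the broker-side limit can be raised per topic with the topic-level max.message.bytes property, and the producer's max.request.size usually has to be raised to match.

```properties
# Illustrative topic-level override for the connector's sync topic
# (default is roughly 1 MB; here raised to 8 MB as an example):
max.message.bytes=8388608

# The producer must also be allowed to send requests of that size:
max.request.size=8388608
```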

       

      Implementation ideas (optional)

      Remove/filter finished partitions before sending the sync event.
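The idea above can be sketched as follows. The type and method names here are hypothetical stand-ins, not the Spanner connector's actual classes: before the task state is serialized into a sync event, partitions already in a terminal state are dropped so the published record stays below the broker limit.

```java
import java.util.List;

// Illustrative sketch of the proposed fix; PartitionState and State
// are stand-ins, not the Debezium Spanner connector's real API.
public class SyncEventFilter {

    public enum State { CREATED, RUNNING, FINISHED }

    // Minimal stand-in for the per-partition state carried in a sync event.
    public record PartitionState(String token, State state) {}

    // Drop FINISHED partitions before the sync event is serialized,
    // so the resulting record stays below max.message.bytes.
    public static List<PartitionState> withoutFinished(List<PartitionState> partitions) {
        return partitions.stream()
                .filter(p -> p.state() != State.FINISHED)
                .toList();
    }
}
```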

              nancyxu567@gmail.com Nancy Xu (Inactive)
              shantanu-sharechat Shantanu Sharma (Inactive)
              Votes: 0
              Watchers: 5
