DBZ-7920: Debezium Postgres JDBC sink not handling infinity values


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 3.0.0.Alpha1
    • Component: jdbc-connector

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      2.6.2.Final

      What is the connector configuration?

       

      {    "batch.size": "2048",    "connection.password": "password",    "connection.url": "jdbc:postgresql://dbhost:6432/dbname?ApplicationName=appname",    "connection.username": "username",    "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",    "dialect.name": "PostgreSqlDatabaseDialect",    "hibernate.c3p0.maxConnectionAge": "120",    "hibernate.c3p0.timeout": "60",    "insert.mode": "upsert",    "key.converter": "io.confluent.connect.json.JsonSchemaConverter",    "key.converter.schema.registry.url": "schema-registry-host",    "key.converter.schemas.enable": "false",    "name": "db-schema-table-01",    "poll.interval.ms": "300",    "primary.key.fields": "id",    "primary.key.mode": "record_key",    "quote.sql.identifiers": "NEVER",    "table.name.format": "schema.table",    "table.types": "TABLE",    "tasks.max": "1",    "topics": "topic.name",    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",    "value.converter.schema.registry.url": "schema-registry-host",    "value.converter.schemas.enable": "false",    "value.converter.schemas.infer.enable": "true"} 

       

What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      Databases in use are PostgreSQL 16.2 both on source and destination side. Deployment is on-premises, no special cloud provider features.

      What behaviour do you expect?

The connector should be able to parse "infinity" and "-infinity" properly and write those values to the table. The issue was observed with timestamp with time zone values, but keep in mind that infinity is used for several data types, including ranges; we suspect it will fail with the same error for any of them.

      Excerpt from the message:

      {    "before":NULL,    "after"{...,            "starttime": "2024-04-11T10:34:00.000000Z",            "endtime": "infinity",            ...    },    "source": {        "version": "2.2.1.Final",        "connector": "postgresql",        ...    }"op": "c",    "ts_ms": 1717411223710,    "transaction":NULL} 

       

      What behaviour do you see?

The sink connector fails with the error provided below.

Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

We haven't tried an Alpha/Beta/CR version.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

       

      org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
      	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:632)
      	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:350)
      	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:250)
      	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:219)
      	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
      	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
      	at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:236)
      	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
      	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
      	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
      	at java.base/java.lang.Thread.run(Thread.java:840)
      Caused by: org.apache.kafka.connect.errors.ConnectException: JDBC sink connector failure
      	at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:96)
      	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
      	... 11 more
      Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
      	at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffer(JdbcChangeEventSink.java:210)
      	at io.debezium.connector.jdbc.JdbcChangeEventSink.lambda$flushBuffers$2(JdbcChangeEventSink.java:188)
      	at java.base/java.util.HashMap.forEach(HashMap.java:1421)
      	at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffers(JdbcChangeEventSink.java:188)
      	at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:149)
      	at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:103)
      	... 12 more
      Caused by: java.time.format.DateTimeParseException: Text 'infinity' could not be parsed at index 0
      	at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2052)
      	at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1954)
      	at java.base/java.time.ZonedDateTime.parse(ZonedDateTime.java:600)
      	at io.debezium.connector.jdbc.type.debezium.ZonedTimestampType.bind(ZonedTimestampType.java:48)
      	at io.debezium.connector.jdbc.SinkRecordDescriptor$FieldDescriptor.bind(SinkRecordDescriptor.java:247)
      	at io.debezium.connector.jdbc.dialect.GeneralDatabaseDialect.bindValue(GeneralDatabaseDialect.java:417)
      	at io.debezium.connector.jdbc.RecordWriter.bindFieldValuesToQuery(RecordWriter.java:156)
      	at io.debezium.connector.jdbc.RecordWriter.bindNonKeyValuesToQuery(RecordWriter.java:141)
      	at io.debezium.connector.jdbc.RecordWriter.bindValues(RecordWriter.java:115)
      	at io.debezium.connector.jdbc.RecordWriter.lambda$processBatch$0(RecordWriter.java:75)
      	at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:37)
      	at org.hibernate.internal.AbstractSharedSessionContract.lambda$doWork$4(AbstractSharedSessionContract.java:966)
      	at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:303)
      	at org.hibernate.internal.AbstractSharedSessionContract.doWork(AbstractSharedSessionContract.java:977)
      	at org.hibernate.internal.AbstractSharedSessionContract.doWork(AbstractSharedSessionContract.java:965)
      	at io.debezium.connector.jdbc.RecordWriter.write(RecordWriter.java:51)
      	at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffer(JdbcChangeEventSink.java:203)
      	... 17 more
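The root cause at the bottom of the trace can be reproduced in isolation. This small snippet is ours, for illustration only; it triggers the same exception that ZonedTimestampType.bind runs into:

import java.time.ZonedDateTime;
import java.time.format.DateTimeParseException;

public class InfinityParseFailure {
    public static void main(String[] args) {
        try {
            // Same call the stack trace points at: java.time has no notion of
            // PostgreSQL's textual "infinity" marker.
            ZonedDateTime.parse("infinity");
        }
        catch (DateTimeParseException e) {
            // Prints: Text 'infinity' could not be parsed at index 0
            System.out.println(e.getMessage());
        }
    }
}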

       

      How to reproduce the issue using our tutorial deployment?

To reproduce the issue, it is sufficient to have a message with an "infinity" value for a timestamptz field and attempt to sink it to a table.
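For example, assuming a captured source table matching the placeholders in the configuration above (the connection details, table, and column names below are illustrative, not a verified schema), inserting a row with an open-ended end time is enough to produce such a message:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Illustrative only: writes a row with an 'infinity' timestamptz into the
// captured source table; connection details and identifiers are placeholders.
public class InsertInfinityRow {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/dbname", "username", "password");
             Statement stmt = conn.createStatement()) {
            // PostgreSQL renders the open-ended end time as the literal "infinity",
            // which then shows up verbatim in the change event consumed by the sink.
            // The identifiers are quoted here only because "table" is a reserved word.
            stmt.executeUpdate(
                "INSERT INTO \"schema\".\"table\" (id, starttime, endtime) "
                + "VALUES (1, now(), 'infinity'::timestamptz)");
        }
    }
}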

            Assignee: rh-ee-mvitale Mario Fiore Vitale
            Reporter: miloseskert Miloš Eškert (Inactive)