Debezium / DBZ-5014

DateTimeParseException: Text 'infinity' could not be parsed in Postgres connector


Details

      We are not yet able to create a simplistic, local example and have not yet narrowed down any conditions. Will update if we find anything.

    Description

      Bug report

      What Debezium connector do you use and what version?

      Postgres 1.9.0.Final

      What is the connector configuration?

       

      {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "tasks.max": "1",
        "database.hostname": "HOSTNAME.us-east-1.rds.amazonaws.com",
        "database.port": "5432",
        "database.user": "${USERNAME}",
        "database.password": "${PASSWORD}",
        "database.dbname": "DBNAME",
        "database.server.name": "audit-log-prod-test",
        "decimal.handling.mode": "string",
        "plugin.name": "pgoutput",
        "slot.name": "debezium",
        "schema.exclude.list": "journal",
        "table.exclude.list": "public.audit_log_heartbeat",
        "column.exclude.list": "public.user.password",
        "heartbeat.action.query": "INSERT INTO audit_log_heartbeat (id, ts) VALUES (1, NOW()) ON CONFLICT(id) DO UPDATE SET ts=EXCLUDED.ts;",
        "heartbeat.interval.ms": 60000,
        "snapshot.mode": "never",
        "topic.creation.default.replication.factor": 3,
        "topic.creation.default.partitions": 1
      }


      What is the captured database version and mode of deployment?

      AWS RDS Postgres 
      Engine version: 11.7

      What behaviour do you expect?

      No exceptions.  Debezium continues to operate.

      What behaviour do you see?

       

      2022-04-19 20:52:51 ERROR Cannot parse time/date value 'infinity', expected format 'y..y-MM-dd[ GG]'
      2022-04-19 20:52:51 ERROR Producer failure  
      
      ...
      
      2022-04-19 20:52:51 ERROR WorkerSourceTask{id=xxxx-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted
      org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped. 
              at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:50)
              at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:172)
              at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:41)
              at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:174)
              at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:141)
              at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:109)
              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: org.apache.kafka.connect.errors.ConnectException: Error while processing event at offset {transaction_id=null, lsn_proc=79619438026376, lsn_commit=79619438056848, lsn=79619438026376, txId=315899941, ts_usec=1650401570841182}
              at io.debezium.pipeline.EventDispatcher.dispatchDataChangeEvent(EventDispatcher.java:254)
              at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.lambda$processMessages$0(PostgresStreamingChangeEventSource.java:267)
              at io.debezium.connector.postgresql.connection.pgoutput.PgOutputMessageDecoder.decodeDelete(PgOutputMessageDecoder.java:498)
              at io.debezium.connector.postgresql.connection.pgoutput.PgOutputMessageDecoder.processNotEmptyMessage(PgOutputMessageDecoder.java:197)
              at io.debezium.connector.postgresql.connection.AbstractMessageDecoder.processMessage(AbstractMessageDecoder.java:33)
              at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:508)
              at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:500)
              at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.processMessages(PostgresStreamingChangeEventSource.java:207)
              at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:169)
              ... 9 more
      Caused by: org.apache.kafka.connect.errors.ConnectException: java.time.format.DateTimeParseException: Text 'infinity' could not be parsed at index 0
              at io.debezium.connector.postgresql.connection.wal2json.DateTimeFormat$ISODateTimeFormat.format(DateTimeFormat.java:154)
              at io.debezium.connector.postgresql.connection.wal2json.DateTimeFormat$ISODateTimeFormat.date(DateTimeFormat.java:135)
              at io.debezium.connector.postgresql.connection.AbstractColumnValue.asLocalDate(AbstractColumnValue.java:44)
              at io.debezium.connector.postgresql.connection.pgoutput.PgOutputReplicationMessage.getValue(PgOutputReplicationMessage.java:97)
              at io.debezium.connector.postgresql.connection.pgoutput.PgOutputMessageDecoder$1.getValue(PgOutputMessageDecoder.java:733)
              at io.debezium.connector.postgresql.PostgresChangeRecordEmitter.columnValues(PostgresChangeRecordEmitter.java:184)
              at io.debezium.connector.postgresql.PostgresChangeRecordEmitter.getOldColumnValues(PostgresChangeRecordEmitter.java:114)
              at io.debezium.relational.RelationalChangeRecordEmitter.emitDeleteRecord(RelationalChangeRecordEmitter.java:133)
              at io.debezium.relational.RelationalChangeRecordEmitter.emitChangeRecords(RelationalChangeRecordEmitter.java:56)
              at io.debezium.connector.postgresql.PostgresChangeRecordEmitter.emitChangeRecords(PostgresChangeRecordEmitter.java:96)
              at io.debezium.pipeline.EventDispatcher.dispatchDataChangeEvent(EventDispatcher.java:217)
              ... 17 more
      Caused by: java.time.format.DateTimeParseException: Text 'infinity' could not be parsed at index 0
              at java.base/java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:2046)
              at java.base/java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1948)
              at java.base/java.time.LocalDate.parse(LocalDate.java:428)
              at io.debezium.connector.postgresql.connection.wal2json.DateTimeFormat$ISODateTimeFormat.lambda$date$0(DateTimeFormat.java:135)
      2022-04-19 20:52:51 INFO  Connection gracefully closed
      2022-04-19 20:52:51 INFO  Finished streaming
      
      ...
      
      2022-04-19 20:52:51 INFO  [Producer clientId=connector-producer-XXXX-0] Closing the Kafka producer with timeoutMillis = 30000 ms.
      2022-04-19 20:52:51 INFO  Metrics scheduler closed
      2022-04-19 20:52:51 INFO  Closing reporter org.apache.kafka.common.metrics.JmxReporter

       
      As can be seen from the stacktrace, the execution path is entirely within Debezium's codebase. We have collected only 3 samples thus far; all of them involved `DELETE` operations. Note the references to `emitDeleteRecord(RelationalChangeRecordEmitter.java:133)` and `decodeDelete(PgOutputMessageDecoder.java:498)` in the stacktrace.
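      The root cause is reproducible in isolation: `java.time.LocalDate.parse` has no notion of PostgreSQL's special `infinity` value. A minimal, standalone demonstration (not Debezium code, just the same JDK call the stacktrace bottoms out in):

      ```java
      import java.time.LocalDate;
      import java.time.format.DateTimeParseException;

      public class ParseFailureDemo {
          public static void main(String[] args) {
              try {
                  // The raw column text PostgreSQL sends for an infinite date,
                  // fed to the same call as AbstractColumnValue.asLocalDate.
                  LocalDate.parse("infinity");
              } catch (DateTimeParseException e) {
                  // Prints: Text 'infinity' could not be parsed at index 0
                  System.out.println(e.getMessage());
              }
          }
      }
      ```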
       

      Do you see the same behaviour using the latest released Debezium version?

      Yes.  1.9.0.Final is the latest at this time.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      Yes.  We can attempt to collect new samples. 

      How to reproduce the issue using our tutorial deployment?

      <Your answer>

       

      Additional context

      Debezium is deployed in a streaming-only (no snapshot) mode. The exception was encountered during a pilot test run of monitoring an actual production database (for stress testing + to collect a lot of samples and scenarios). It hasn't been easy for us to replicate the conditions locally.

      Almost every table has `REPLICA IDENTITY FULL`.

      We attempt to collect Debezium metrics via Jolokia.

      Seems related to DBZ-2614, DBZ-2450, DBZ-2844
      Seems related to this post in the wild.

      What is also confusing is that the docs state:

      PostgreSQL supports using +/-infinite values in TIMESTAMP columns. These special values are converted to timestamps with value 9223372036825200000 in case of positive infinity or -9223372036832400000 in case of negative infinity. This behaviour mimics the standard behaviour of PostgreSQL JDBC driver - see org.postgresql.PGStatement interface for reference.
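      For reference, that documented conversion can be sketched as a guard applied before date parsing. `toEpochMillisOrInfinity` is a hypothetical helper name of ours, but the two long values are the ones the docs quote (they correspond to `DATE_POSITIVE_INFINITY` / `DATE_NEGATIVE_INFINITY` in `org.postgresql.PGStatement`):

      ```java
      import java.time.LocalDate;

      public class InfinityDateDemo {
          // Special values from org.postgresql.PGStatement, as cited in the docs.
          static final long POSITIVE_INFINITY_MILLIS = 9223372036825200000L;
          static final long NEGATIVE_INFINITY_MILLIS = -9223372036832400000L;

          // Hypothetical guard: map the special textual values first, and only
          // fall back to LocalDate.parse (the call that currently throws).
          static long toEpochMillisOrInfinity(String text) {
              if ("infinity".equals(text)) {
                  return POSITIVE_INFINITY_MILLIS;
              }
              if ("-infinity".equals(text)) {
                  return NEGATIVE_INFINITY_MILLIS;
              }
              return LocalDate.parse(text).toEpochDay() * 86_400_000L;
          }

          public static void main(String[] args) {
              System.out.println(toEpochMillisOrInfinity("infinity"));
              System.out.println(toEpochMillisOrInfinity("1970-01-02")); // 86400000
          }
      }
      ```

      That the failing code path bypasses any such guard suggests the special-value handling described in the docs is not applied on this branch.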

      We have not yet tried to re-collect samples with 1.8.1.Final. But given that handling of `infinity` values has been an issue in the past (e.g. around 1.5.0), we aren't optimistic that this is merely a new phenomenon that downgrading would help with.

       

      Summary

      We believe this is a significant problem.  A Postgres-allowed value can effectively kill Debezium.


          People

            Unassigned Unassigned
            bplies Bradley Plies (Inactive)