Debezium / DBZ-5255

Debezium Postgres Connector Incorrectly ignoring LSNs and losing data


    • Type: Bug
    • Priority: Major
    • Resolution: Obsolete
    • Affects Version/s: 1.5.2.Final
    • Component/s: postgresql-connector

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      1.5.2.Final

      What is the connector configuration?

      {
                      "binary.handling.mode": "base64",
                      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
                      "database.dbname": "foo_prod",
                      "database.hostname": "foo-db.example.com",
                      "database.password": "${file:/etc/kafka/secrets/connect.properties:PG_PASSWORD}",
                      "database.port": "5432",
                      "database.server.name": "foo_prod",
                      "database.user": "${file:/etc/kafka/secrets/connect.properties:PG_USERNAME}",
                      "datatype.propagate.source.type": ".+\\.numeric,.+\\.bytea",
                      "decimal.handling.mode": "string",
                      "heartbeat.action.query": "INSERT INTO logical_ticker.tick (tick_time) VALUES (now()) ON CONFLICT (db) DO UPDATE SET tick_time = now();",
                      "heartbeat.interval.ms": "10000",
                      "include.unknown.datatypes": "true",
                      "key.converter": "io.confluent.connect.avro.AvroConverter",
                      "key.converter.basic.auth.credentials.source": "USER_INFO",
                      "key.converter.basic.auth.user.info": "${file:/etc/kafka/secrets/connect.properties:SR_USERNAME}:${file:/etc/kafka/secrets/connect.properties:SR_PASSWORD}",
                      "key.converter.schema.registry.url": "https://cluster.region.aws.confluent.cloud",
                      "name": "foo_prod:debezium_postgres",
                      "plugin.name": "pgoutput",
                      "snapshot.mode": "exported",
                      "table.include.list": ".....",
                      "tasks.max": "1",
                      "toasted.value.placeholder": "__debezium_unchanged_toast_value",
                      "value.converter": "io.confluent.connect.avro.AvroConverter",
                      "value.converter.basic.auth.credentials.source": "USER_INFO",
                      "value.converter.basic.auth.user.info": "${file:/etc/kafka/secrets/connect.properties:SR_USERNAME}:${file:/etc/kafka/secrets/connect.properties:SR_PASSWORD}",
                      "value.converter.schema.registry.url": "https://cluster.region.aws.confluent.cloud"
      } 
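
      For reference, the heartbeat.action.query above writes to a logical_ticker.tick table on every heartbeat interval. The report does not include that table's definition; a minimal sketch consistent with the ON CONFLICT (db) clause is shown below (the primary key on db and its default are assumptions, not taken from the report):

              -- Hypothetical definition of the heartbeat table used by heartbeat.action.query.
              -- ON CONFLICT (db) requires a unique constraint on db, and inserting only
              -- tick_time requires db to have a default; both are assumed here.
              CREATE SCHEMA IF NOT EXISTS logical_ticker;
              CREATE TABLE IF NOT EXISTS logical_ticker.tick (
                  db        text PRIMARY KEY DEFAULT current_database(),
                  tick_time timestamptz NOT NULL DEFAULT now()
              );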

      What is the captured database version and mode of deployment?

      Vanilla PostgreSQL 11.15, deployed on-premises.

      What behaviour do you expect?

      Data is published to Kafka continuously from the publication.

      What behaviour do you see?

      A large gap of data is missing after the incident.

      Do you see the same behaviour using the latest released Debezium version?

      I haven't tried; I cannot reproduce the issue, and it's the first time we have seen it in several years for any of our connectors.

      Do you have the connector logs, ideally from start till finish?

      Attached. The first file shows all logs from the exception and connector restart; the second file shows the constant messages about LSNs being ignored that followed.

      How to reproduce the issue using our tutorial deployment?

      N/A

        1. issue_20220610_connector_log.csv
          65 kB
          Jeremy Finzel
        2. issue_20220610_connector_log2.csv
          47 kB
          Jeremy Finzel

              Assignee: Unassigned
              Reporter: Jeremy Finzel (jfinzel)
              Votes: 2
              Watchers: 4
