Debezium / DBZ-1335

Setting "include.unknown.datatypes" to true works for streaming but not during snapshot


    Steps to Reproduce
      • Enable the ltree extension in the database.
      • Create a table with an ltree array column.
      • Insert a row with values in the ltree array column.
      • Create a Debezium connector with "include.unknown.datatypes" set to true and allow the snapshot to complete.
      • Insert another row into the table with values in the ltree array column.
      • Read the Debezium output topic and find the messages corresponding to the two rows just created (see the SQL sketch after this list).

      The row added during the snapshot has a null value, while the other has a non-null value.
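      A minimal SQL sketch of the database-side steps, assuming a throwaway table (the table name, column names, and label values here are illustrative, not taken from the original report):

      -- enable the ltree extension
      CREATE EXTENSION IF NOT EXISTS ltree;

      -- table with an ltree array column
      CREATE TABLE tree_test (
        id     INT PRIMARY KEY,
        paths  LTREE ARRAY
      );

      -- row present before the snapshot (arrives on the topic with a null paths value)
      INSERT INTO tree_test VALUES (1, ARRAY['a.b.c'::ltree]);

      -- ...register the connector with "include.unknown.datatypes": "true"
      -- and wait for the snapshot to complete, then:

      -- row captured via streaming (arrives with a non-null, base64-encoded raw value)
      INSERT INTO tree_test VALUES (2, ARRAY['a.b.d'::ltree]);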


    Description

      Setting "include.unknown.datatypes" option to true works for streaming (base64 encoded raw value is returned), but it doesn't seem to work during snapshot. The column type in my case is ltree array. I see WARN Postgres|postgres1|records-snapshot-producer Unexpected JDBC BINARY value for field ancestor_ids with schema Schema

      {BYTES}

      : class=class java.util.Arrays$ArrayList, value=... [io.debezium.connector.postgresql.PostgresValueConverter] and null value is returned

      The approximate table definition looks like this:

      CREATE TABLE item (
        id                       UUID PRIMARY KEY,
        ancestor_ids             extensions.LTREE ARRAY,
        -- "extensions" is the schema name where ltree plugin is enabled
        -- more fields follow
      );
      
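      For example, a row with ltree array values might be inserted like this (the UUID literal and label paths are hypothetical, chosen only for illustration):

      -- hypothetical sample row; the cast is schema-qualified because
      -- the ltree extension lives in the "extensions" schema
      INSERT INTO item (id, ancestor_ids)
      VALUES ('00000000-0000-0000-0000-000000000001',
              ARRAY['root.parent'::extensions.ltree, 'root'::extensions.ltree]);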

      The connector definition is the following:

      {
        "name": "postgres1-public-connector",
        "config": {
          "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
          "tasks.max": "1",
          "database.hostname": "dbz-postgres",
          "database.port": "5432",
          "database.user": "postgres",
          "database.password": "...",
          "database.dbname": "postgres",
          "database.server.name": "postgres1",
          "database.whitelist": "postgres",
          "tombstones.on.delete": "false",
          "schema.whitelist": "cms_\\w+|cms",
          "database.history.kafka.bootstrap.servers": "dbz-kafka:9092",
          "database.history.kafka.topic": "postgres1.cms.schema-changes",
          "transforms": "route",
          "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
          "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
          "transforms.route.replacement": "$1.cms_all",
          "key.converter": "org.apache.kafka.connect.json.JsonConverter",
          "key.converter.schemas.enable": "false",
          "value.converter": "org.apache.kafka.connect.json.JsonConverter",
          "value.converter.schemas.enable": "false",
          "include.unknown.datatypes": "true",
          "snapshot.mode": "initial",
          "heartbeat.interval.ms": "3000",
          "heartbeat.topics.prefix": "__debezium-heartbeat"
        }
      }
      
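      One way to sanity-check the connector from the Postgres side is to inspect its logical replication slot (this query is a general check, not part of the original report; "debezium" is only the connector's default "slot.name"):

      -- shows the decoding plugin in use and whether the slot is actively consumed
      SELECT slot_name, plugin, active
      FROM pg_replication_slots;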

      I posted the question on Stack Overflow and was advised that this is a bug:
      https://stackoverflow.com/questions/53326265/is-there-a-way-to-enable-support-in-debezium-postgres-connector-to-capture-compo/53326602#53326602

            People

              Assignee: Jiri Pechanec (jpechane)
              Reporter: Pawel Zieminski (pzieminski@gmail.com)
