Debezium / DBZ-5663

DB2 BLOB Sends No Data, Logs Warning

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Fix Version: 2.2-backlog
    • Component: db2-connector

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      debezium-connector-db2-1.9.5.Final plugin, modified to work around the column-default issue tracked in https://issues.redhat.com/browse/DBZ-4990

      What is the connector configuration?

      {
        "name": "appmod_kc_cpc_cloud_select_tables_0000000000002_source_connector_0",
        "config": {
          "connector.class": "io.debezium.connector.db2.Db2Connector",
          "topic.creation.default.partitions": "1",
          "database.history.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"kafka_token\";",
          "transforms": "unwrap",
          "database.history.ssl.protocol": "TLSv1.2",
          "include.schema.changes": "false",
          "transforms.unwrap.drop.tombstones": "false",
          "topic.creation.default.replication.factor": "3",
          "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
          "database.history.store.only.captured.tables.ddl": "true",
          "database.history.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"kafka_token\";",
          "database.history.producer.sasl.mechanism": "PLAIN",
          "database.dbname": "redacted",
          "database.user": "redacted",
          "database.history.kafka.bootstrap.servers": "redacted:9093",
          "database.history.ssl.endpoint.identification.algorithm": "HTTPS",
          "database.sslConnection": "true",
          "topic.creation.enable": "true",
          "key.converter.schemas.enable": "false",
          "database.password": "redacted",
          "value.converter.schemas.enable": "false",
          "name": "appmod_kc_cpc_cloud_select_tables_0000000000002_source_connector_0",
          "database.history.store.only.monitored.tables.ddl": "true",
          "database.history.security.protocol": "SASL_SSL",
          "database.history.consumer.sasl.mechanism": "PLAIN",
          "snapshot.mode": "initial_only",
          "tasks.max": "1",
          "database.history.kafka.topic": "appmod_kc_cpc_cloud_select_tables_0000000000002_schema_changes",
          "database.history.consumer.security.protocol": "SASL_SSL",
          "database.history.producer.ssl.enabled.protocols": "TLSv1.2",
          "database.history.ssl.enabled.protocols": "TLSv1.2",
          "database.history.consumer.ssl.endpoint.identification.algorithm": "HTTPS",
          "database.history.skip.unparseable.ddl": "true",
          "database.history.sasl.mechanism": "PLAIN",
          "database.history.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"kafka_token\";",
          "database.history.producer.ssl.endpoint.identification.algorithm": "HTTPS",
          "topic.creation.default.cleanup.policy": "delete",
          "database.history.producer.security.protocol": "SASL_SSL",
          "time.precision.mode": "connect",
          "database.server.name": "appmod_kc_cpc_cloud_select_tables_0000000000002_1",
          "event.processing.failure.handling.mode": "warn",
          "snapshot.isolation.mode": "read_committed",
          "database.history.producer.ssl.protocol": "TLSv1.2",
          "topic.creation.default.retention.ms": "2419200000",
          "database.port": "55000",
          "database.history.consumer.ssl.enabled.protocols": "TLSv1.2",
          "database.downgradeHoldCursorsUnderXa": "true",
          "database.history.consumer.ssl.protocol": "TLSv1.2",
          "database.hostname": "redacted",
          "schema.name.adjustment.mode": "none",
          "table.include.list": "KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST",
          "database.sslCertLocation": "/cos/public-keys/redacted.pem"
        }
      }

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      on-premises

      DB2 v11.5.7.0 (build level 0608010F)

      What behaviour do you expect?

      BLOB values should be converted as needed and flow from the source database into Kafka.
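
      For reference on what "converted as needed" would mean here: with "value.converter.schemas.enable": "false" as in the config above, Kafka Connect's JsonConverter serializes Connect BYTES fields as base64 text, so a correctly handled BLOB containing 'test' (the value used in the reproduction steps) should surface in the topic as "dGVzdA==". A minimal sketch of that encoding, independent of Debezium itself:

```java
import java.util.Base64;

public class ExpectedBlobEncoding {
    public static void main(String[] args) {
        // Kafka's JsonConverter emits Connect BYTES as base64 strings,
        // so this is what the BLOBY field should look like in the topic.
        byte[] blobBytes = "test".getBytes();
        System.out.println(Base64.getEncoder().encodeToString(blobBytes));
        // prints "dGVzdA=="
    }
}
```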

      What behaviour do you see?

      The BLOB value never reaches Kafka: the connector logs a warning, and the column arrives as null when loaded into the target database.

      Note: remove the column defaults from the table below to avoid the separate issue tracked in https://issues.redhat.com/browse/DBZ-4990.

      CREATE TABLE "KC_CPC_EMPLOYEES_10K"."DATA_TYPE_TEST" (
      "BIGINTY" BIGINT WITH DEFAULT 4 ,
      "SMALLINTY" SMALLINT NOT NULL WITH DEFAULT 6 ,
      "TIMESTAMPY" TIMESTAMP WITH DEFAULT CURRENT TIMESTAMP ,
      "ID" INTEGER NOT NULL ,
      "DATEC" DATE WITH DEFAULT CURRENT DATE ,
      "BLOBY" BLOB(1048576) LOGGED NOT COMPACT )
      DATA CAPTURE CHANGES
      IN "USERSPACE1"
      ORGANIZE BY ROW;

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      I see 1.9.6 is out, but have not tried it.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)
      WARN Unexpected JDBC BINARY value for field BLOBY with schema Schema{BYTES}: class=class com.ibm.db2.jcc.am.c7, value=com.ibm.db2.jcc.am.c7@65c48ce6 (io.debezium.connector.db2.Db2ValueConverters)
       
      [2022-09-28 21:04:54,325] INFO WorkerSourceTask{id=appmod_kc_cpc_cloud_select_tables_0000000000002_source_connector_0-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask)
      [2022-09-28 21:04:54,326] INFO Metrics registered (io.debezium.pipeline.ChangeEventSourceCoordinator)
      [2022-09-28 21:04:54,326] INFO Context created (io.debezium.pipeline.ChangeEventSourceCoordinator)
      [2022-09-28 21:04:54,326] INFO No previous offset has been found (io.debezium.connector.db2.Db2SnapshotChangeEventSource)
      [2022-09-28 21:04:54,326] INFO According to the connector configuration both schema and data will be snapshotted (io.debezium.connector.db2.Db2SnapshotChangeEventSource)
      [2022-09-28 21:04:54,326] INFO Snapshot step 1 - Preparing (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,326] INFO WorkerSourceTask{id=appmod_kc_cpc_cloud_select_tables_0000000000002_source_connector_0-0} Executing source task (org.apache.kafka.connect.runtime.WorkerSourceTask)
      [2022-09-28 21:04:54,327] INFO Snapshot step 2 - Determining captured tables (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,408] INFO Adding table KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST to the list of capture schema tables (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,409] INFO Snapshot step 3 - Locking captured tables [KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST] (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,409] INFO Schema locking was disabled in connector configuration (io.debezium.connector.db2.Db2SnapshotChangeEventSource)
      [2022-09-28 21:04:54,409] INFO Snapshot step 4 - Determining snapshot offset (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,447] INFO Snapshot step 5 - Reading structure of captured tables (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,447] INFO Reading structure of schema 'KC_CPC_EMPLOYEES_10K' (io.debezium.connector.db2.Db2SnapshotChangeEventSource)
      [2022-09-28 21:04:54,674] INFO Snapshot step 6 - Persisting schema history (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,732] INFO [Producer clientId=appmod_kc_cpc_cloud_select_tables_0000000000002_1-dbhistory] Resetting the last seen epoch of partition appmod_kc_cpc_cloud_select_tables_0000000000002_schema_changes-0 to 0 since the associated topicId changed from null to xSMlyVmKSXmVkJfooKqLtQ (org.apache.kafka.clients.Metadata)
      [2022-09-28 21:04:54,863] INFO Snapshot step 7 - Snapshotting data (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,863] INFO Snapshotting contents of 1 tables while still in transaction (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,863] INFO Exporting data from table 'KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST' (1 of 1 tables) (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,863] INFO For table 'KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST' using select statement: 'SELECT "BIGINTY", "SMALLINTY", "TIMESTAMPY", "ID", "DATEC", "BLOBY" FROM KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST' (io.debezium.relational.RelationalSnapshotChangeEventSource)
      [2022-09-28 21:04:54,901] WARN Unexpected JDBC BINARY value for field BLOBY with schema Schema{BYTES}: class=class com.ibm.db2.jcc.am.c7, value=com.ibm.db2.jcc.am.c7@65c48ce6 (io.debezium.connector.db2.Db2ValueConverters)
      [2022-09-28 21:04:54,902] INFO Finished exporting 2 records for table 'KC_CPC_EMPLOYEES_10K.DATA_TYPE_TEST'; total duration '00:00:00.039' (io.debezium.relational.RelationalSnapshotChangeEventSource)
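
      The warning suggests the DB2 JDBC driver hands back a java.sql.Blob implementation (com.ibm.db2.jcc.am.c7) that Db2ValueConverters does not unwrap before reaching the generic BINARY path, so the value falls through to the "Unexpected JDBC BINARY value" warning and is dropped. A minimal sketch of the kind of handling that would avoid this — the toByteBuffer helper and its placement are assumptions for illustration, not Debezium's actual converter code:

```java
import java.nio.ByteBuffer;
import java.sql.Blob;
import java.sql.SQLException;
import javax.sql.rowset.serial.SerialBlob;

public class BlobConversionSketch {
    // Hypothetical helper: materialize a java.sql.Blob into the ByteBuffer
    // a Schema{BYTES} field expects, instead of warning and returning null.
    static ByteBuffer toByteBuffer(Object data) {
        try {
            if (data instanceof Blob) {
                Blob blob = (Blob) data;
                // JDBC Blob positions are 1-based.
                return ByteBuffer.wrap(blob.getBytes(1, (int) blob.length()));
            }
            if (data instanceof byte[]) {
                return ByteBuffer.wrap((byte[]) data);
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
        // Unhandled type: this is the path that currently logs the warning.
        return null;
    }

    // Build a stand-in Blob (the JDK's SerialBlob) to mimic the driver object.
    static Blob sampleBlob(String s) {
        try {
            return new SerialBlob(s.getBytes());
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = toByteBuffer(sampleBlob("test"));
        System.out.println(new String(buf.array())); // prints "test"
    }
}
```

      Note that large objects may also need streaming rather than a single getBytes call; this sketch only shows the type-unwrapping step that appears to be missing.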

      How to reproduce the issue using our tutorial deployment?

      ALTER TABLE DB2INST1.CUSTOMERS
        ADD COLUMN A_BLOB_COLUMN BLOB;

      INSERT INTO DB2INST1.CUSTOMERS (first_name, last_name, email, A_BLOB_COLUMN)
        VALUES ('Sally', 'Thomas', 'sally.thomas@acme.com', CAST('test' AS BLOB));


              Assignee: Chris Cranford (ccranfor@redhat.com)
              Reporter: KC Ogren (kcogren, Inactive)
              Votes: 2
              Watchers: 3