Debezium / DBZ-8180

Oracle connector: Some of the column values are being set to null during streaming


    • Type: Bug
    • Resolution: Obsolete
    • Priority: Blocker
    • Affects Version: 2.5.0.Final
    • Component: oracle-connector
    • Severity: Critical

      Bug report

      What Debezium connector do you use and what version?

      Oracle connector, version 2.5

      What is the connector configuration?

      connector_config.json
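      The full configuration is attached above. For readers without access to the attachment, a Debezium 2.x Oracle (LogMiner) connector registration generally has the following shape; every value below is a placeholder rather than our actual setting, and only the table name is taken from the attached DDL:

        {
          "name": "oracle-connector",
          "config": {
            "connector.class": "io.debezium.connector.oracle.OracleConnector",
            "tasks.max": "1",
            "database.hostname": "<oracle-host>",
            "database.port": "1521",
            "database.user": "<dbz-user>",
            "database.password": "<password>",
            "database.dbname": "<db-name>",
            "topic.prefix": "<topic-prefix>",
            "table.include.list": "<SCHEMA>.RPRO_RC_LINE_G",
            "schema.history.internal.kafka.bootstrap.servers": "<kafka-host>:9092",
            "schema.history.internal.kafka.topic": "schema-changes.oracle"
          }
        }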

      Here are the table DDL, the archive log extract, and the Kafka messages for reference:

      rpro_rc_line_g_ddl.sql

      client_id_0_archive_logs_scn_3235181181.csv

      client_id_0_kafka_msg_after.json

      client_id_0_kafka_msg.json

      client_id_0_kafka_vs_redo_vs_coldef.csv
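      For anyone trying to cross-check the attached redo extract against the Kafka message, mining a narrow SCN window around the SCN in the file names is one way to do it. This is only a sketch: the archive log path is a placeholder and the window width is arbitrary.

        BEGIN
          DBMS_LOGMNR.ADD_LOGFILE(
            LOGFILENAME => '/path/to/archive_log.arc',  -- placeholder path
            OPTIONS     => DBMS_LOGMNR.NEW);
          DBMS_LOGMNR.START_LOGMNR(
            STARTSCN => 3235181181,
            ENDSCN   => 3235181281,                     -- arbitrary narrow window
            OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
        END;
        /

        -- Compare the reconstructed redo for the affected table
        -- against the 'after' payload of the Kafka message
        SELECT SCN, OPERATION, SQL_REDO
          FROM V$LOGMNR_CONTENTS
         WHERE TABLE_NAME = 'RPRO_RC_LINE_G';

        BEGIN
          DBMS_LOGMNR.END_LOGMNR;
        END;
        /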

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      Oracle v19.0 hosted on AWS EC2

      What behavior do you expect?

      The archive log should be parsed correctly, with a value for every column.

      What behavior do you see?

      Some of the columns are set to null. All column values up to a specific point are parsed and populated correctly; roughly the last half of the columns, in insert order, come through as null.

      Community thread for reference: community link

      Do you see the same behaviour using the latest released Debezium version?

      Not verified yet. We are not able to reproduce this issue: it happens very rarely, perhaps one in 100 million events, and only on one specific table, which has around 300 columns. We created a new connector to process from that SCN, and this time it parsed all the column values and produced a valid Kafka message.

      Do you have the connector logs, ideally from start till finish?

      We do not have trace logs enabled in production, and because this happens so rarely we cannot predict a time window in which to capture them. We will still try to get the logs and attach them here once we have them.
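      Should it recur, TRACE logging can be scoped to just the Oracle connector package so production log volume stays manageable. A sketch for a stock Kafka Connect log4j.properties (the logger name is the standard Debezium package; where the file lives depends on the installation):

        # Assumed addition to the Connect worker's log4j.properties;
        # scopes TRACE to the Debezium Oracle connector only
        log4j.logger.io.debezium.connector.oracle=TRACE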

      How to reproduce the issue using our tutorial deployment?

      Unable to reproduce


      Attachments:

        1. connector_config.json
          9 kB
        2. rpro_rc_line_g_ddl.sql
          13 kB
        3. client_id_0_archive_logs_scn_3235181181.csv
          20 kB
        4. client_id_0_kafka_msg_after.json
          8 kB
        5. client_id_0_kafka_vs_redo_vs_coldef.csv
          21 kB
        6. client_id_0_kafka_msg.json
          13 kB
        7. dbz-trace-1.log
          32 kB
        8. dbz-redo-1.xlsx
          17 kB
        9. dbz-trace-2.log
          34 kB
        10. dbz-redo-2.xlsx
          14 kB
        11. scn-428083918254.xlsx
          45 kB
        12. dbz_trace_3.log
          39 kB
        13. scn-428294654545.xlsx
          21 kB

      Assignee: Unassigned
      Reporter: Mithun Vigneswar G (mvigneswar-zuora)