Debezium / DBZ-3940

Outbox Event Router not working in Oracle Connector


Details

    Description

      Hello.

      I have successfully deployed and tested the Oracle connector using the latest nightly build (1.7.0-20210831.000246-215) with Strimzi 0.25 and Oracle 19c Enterprise.
      But unfortunately, the Outbox SMT is not working for me (I have also tested 1.7.0.Beta1).
      Maybe the problem is on my side, so any help would be appreciated.

      First, I tested the Oracle connector with the following config to see whether it works at all:

      config:
          database.hostname: 192.168.99.108
          database.port: 1521
          database.user: C##DBZUSER
          database.password: dbz
          database.dbname: ORCLCDB
          database.pdb.name: ORCLPDB1
          database.server.name: server1
          database.connection.adapter: logminer
          schema.include.list: C##DBZUSER
          table.include.list: C##DBZUSER.OUTBOX
          tombstones.on.delete: false
          database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
          database.history.kafka.topic: schema-changes.outbox
      

      And I can confirm that it works: events are published to the server1.C__DBZUSER.OUTBOX topic. (I used the setup-logminer.sh script from the oracle-vagrant-box GitHub repository to set up LogMiner.)
      So everything works well so far.
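
      For context, that script mainly automates the standard LogMiner prerequisites. A rough sketch of the core steps, as I understand them (the actual script may differ in detail):

      -- Put the database into archive log mode (requires a restart):
      SHUTDOWN IMMEDIATE;
      STARTUP MOUNT;
      ALTER DATABASE ARCHIVELOG;
      ALTER DATABASE OPEN;

      -- Enable minimal supplemental logging at the database level:
      ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;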

      But when I change my config to use the Outbox SMT, I expect events to be routed to the outbox.event.order topic (route.by.field is AGGREGATE_TYPE, and its value in my rows is 'order'), but unfortunately that is not happening.

        config:
          database.hostname: 192.168.99.108
          database.port: 1521
          database.user: C##DBZUSER
          database.password: dbz
          database.dbname: ORCLCDB
          database.pdb.name: ORCLPDB1
          database.server.name: server1
          database.connection.adapter: logminer
          schema.include.list: C##DBZUSER
          table.include.list: C##DBZUSER.OUTBOX
          table.field.event.id: ID
          table.field.event.key: AGGREGATE_ID
          table.field.event.payload: PAYLOAD
          table.field.event.timestamp: TIMESTAMP
          table.field.event.payload.id: AGGREGATE_ID
          route.by.field: AGGREGATE_TYPE
          tombstones.on.delete: false
          transforms: outbox
          transforms.outbox.type: io.debezium.transforms.outbox.EventRouter
          transforms.outbox.route.topic.replacement: outbox.event.${routedByValue}
          transforms.outbox.table.fields.additional.placement: type:header:eventType
          poll.interval.ms: 100
      

      I'm also monitoring the logs of my KafkaConnect pod, but there are no relevant log entries or exceptions for this issue, even after inserting and committing a new row into the outbox table. Update: read the comments.
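
      One thing I'm not sure about: as far as I understand, Kafka Connect only hands options to an SMT when they are prefixed with the transform alias (transforms.<alias>.*), so the unprefixed table.field.event.* and route.by.field entries in my config above might be ignored. A fully prefixed variant of the main Outbox options (a sketch only, using the same values as above) would look like:

          transforms: outbox
          transforms.outbox.type: io.debezium.transforms.outbox.EventRouter
          transforms.outbox.table.field.event.id: ID
          transforms.outbox.table.field.event.key: AGGREGATE_ID
          transforms.outbox.table.field.event.payload: PAYLOAD
          transforms.outbox.table.field.event.timestamp: TIMESTAMP
          transforms.outbox.route.by.field: AGGREGATE_TYPE
          transforms.outbox.route.topic.replacement: outbox.event.${routedByValue}
          transforms.outbox.table.fields.additional.placement: type:header:eventType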

      My Outbox table schema:

      SQL> describe OUTBOX;
       Name                                      Null?    Type
       ----------------------------------------- -------- ----------------------
       ID                                        NOT NULL RAW(255)
       AGGREGATE_ID                              NOT NULL VARCHAR2(255 CHAR)
       AGGREGATE_TYPE                            NOT NULL VARCHAR2(255 CHAR)
       PAYLOAD                                            CLOB
       TIMESTAMP                                          TIMESTAMP(6)
       TYPE                                      NOT NULL VARCHAR2(255 CHAR)
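
      For reference, a CREATE TABLE statement equivalent to the schema shown above:

      CREATE TABLE outbox (
          id             RAW(255) NOT NULL,
          aggregate_id   VARCHAR2(255 CHAR) NOT NULL,
          aggregate_type VARCHAR2(255 CHAR) NOT NULL,
          payload        CLOB,
          timestamp      TIMESTAMP(6),
          type           VARCHAR2(255 CHAR) NOT NULL
      );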
      

      One example of the data that I insert:

      SQL> insert into outbox (aggregate_id, aggregate_type, timestamp, type, id, payload) values ('111', 'order', '30-AUG-21 10.44.14.442466 AM', 'order-placement', 'E9571F8DE92A48AFB1A9498BB8F297B1', '{"id":202,"itemId":12,"quantity":5,"customerId":25}');
      
      SQL> commit;

      And I have enabled SUPPLEMENTAL LOG DATA for the table.

      ALTER TABLE c##dbzuser.outbox ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; 
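
      If it helps, the supplemental log group for the table can be checked via the standard ALL_LOG_GROUPS dictionary view:

      SELECT log_group_name, log_group_type
        FROM all_log_groups
       WHERE owner = 'C##DBZUSER'
         AND table_name = 'OUTBOX';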
      

      I have tested inserting rows both with Hibernate from a Java application and with Oracle SQL*Plus.
      I also tried different variations of the connector config (adding key and value serializers, removing schema.include.list, adding the database.history.kafka.* options, and ...), but nothing worked.

      Is this a bug or am I doing something wrong?

      Thanks in advance.

       


          People

            Assignee: Chris Cranford (ccranfor@redhat.com)
            Reporter: Sina Nourian (p30sina) (Inactive)
