Debezium / DBZ-3954

Oracle Connector replicating data from all PDBs. Missing PDB filter during replication.

    Steps to reproduce:

      Create two PDBs in Oracle 19c and create the same schema and table in both.
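
      For completeness, a minimal sketch of creating the two PDBs from CDB$ROOT as SYSDBA, assuming Oracle Managed Files are enabled (otherwise a FILE_NAME_CONVERT clause is needed); the admin credentials below are placeholders:

      -- Run as SYSDBA in CDB$ROOT; admin user name and password are assumptions.
      CREATE PLUGGABLE DATABASE pdb1 ADMIN USER pdbadmin IDENTIFIED BY pdbadmin;
      CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdbadmin IDENTIFIED BY pdbadmin;
      ALTER PLUGGABLE DATABASE pdb1 OPEN;
      ALTER PLUGGABLE DATABASE pdb2 OPEN;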

      create table test_schema.test(
      id number,
      test varchar(20)
      );
      ALTER TABLE test_schema.test ADD CONSTRAINT test_pk PRIMARY KEY (id);
      ALTER TABLE test_schema.test ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
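
      The DDL above has to be executed in each PDB separately; a minimal sketch of switching the session container between runs (assuming a user with the needed privileges in both containers):

      ALTER SESSION SET CONTAINER = PDB1;
      -- run the CREATE TABLE / ALTER TABLE statements above
      ALTER SESSION SET CONTAINER = PDB2;
      -- run them again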

      Let's assume that PDB1 is our source database.
      Now initialize the Debezium connector like this, with PDB1 specified:

      {
            "connector.class": "io.debezium.connector.oracle.OracleConnector",
            "database.connection.adapter": "logminer",
            "tasks.max": "1",
            "database.server.name": "oracleserverpdb",
            "database.hostname": "IP",
            "database.port": "1521",
            "database.user": "c##dbzuser",
            "database.password": "****",
            "database.dbname": "orcl",
            "database.pdb.name": "PDB1",
            "database.history.kafka.bootstrap.servers": "IP:9092",
            "database.history.kafka.topic": "schema-changes.inventory",
            "snapshot.mode": "schema_only",
            "table.include.list": "test_schema.test"
      }
      

       
      After successful connector creation, execute an insert on PDB2 (not the PDB specified in the connector configuration):

      insert into test_schema.test values (1,'a');
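
      To confirm that the insert really runs in PDB2 and not PDB1, the session's current container can be checked first; a sketch:

      -- Returns the name of the container the current session is connected to.
      SELECT SYS_CONTEXT('USERENV', 'CON_NAME') FROM dual;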

      In Kafka you will see changes captured from PDB2.
      Instead of skipping these changes, Debezium replicates them.


      Issue: The Oracle connector replicates data from tables in all PDBs, despite a single PDB being specified in the configuration.

      Environment:

      DB host: Oracle Linux 7
      DB version: Oracle 19c
      Oracle connector: 1.6.1.Final, using LogMiner
      Debezium on classic Kafka and Kafka Connect

      Expected result:
      Retrieve data from the included tables in the specified PDB only.

      Actual result:
      Retrieve data from the included tables, but from all PDBs on the database server.

      Some investigation on this issue:

      When mining at the CDB$ROOT level, LogMiner records changes from all PDBs, but Debezium does not filter them by PDB.

      A WHERE filter can be applied on the SRC_CON_NAME column, which contains the name of the PDB each change originated from.
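
      For example, a query like the following against V$LOGMNR_CONTENTS keeps only changes originating from the configured PDB (a sketch of the idea, not the connector's actual mining query):

      SELECT scn, table_name, operation, sql_redo, src_con_name
      FROM   v$logmnr_contents
      WHERE  src_con_name = 'PDB1';  -- drop rows coming from other PDBs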

      A short discussion on Gitter is available under this link.

      Assignee: Chris Cranford (ccranfor@redhat.com)
      Reporter: Dominik Maciejewski (domis97)