  Debezium / DBZ-9054

Oracle connector does not capture DML operations when the PDB name is lowercase


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Status: Backlog
    • Affects Version: 3.1.1.Final
    • Component: oracle-connector

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

Version: Debezium Oracle Connector 3.1.1.Final

      What is the connector configuration?

      {
        "name": "customers-dbca",
        "config": {
          "connector.class": "io.debezium.connector.oracle.OracleConnector",
          "tasks.max": "1",
          "database.hostname": "xx.xx.xx.xx",
          "database.port": "xxx",
          "database.user": "c##dbzuser",
          "database.password": "dbz",
          "database.dbname": "gblcdbis",
          "database.pdb.name": "\"\"kafka\"\"",
          "database.server.name": "test_gblpdb",
          "table.include.list": "KUSER.CUSTOMERS",
          "schema.history.internal.store.only.captured.tables.ddl": "true",
          "topic.prefix": "GBLUATP",
          "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
          "schema.history.internal.kafka.topic": "schema-changes-newJ"
        }
      }

      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      Oracle Database 19c (on-premises)

      PDB: "gblpdb" (lowercase)

      CDB: gblcdbis

      What behavior do you expect?

The connector should capture all DML operations after the initial table snapshot.

      What behavior do you see?

The connector does not capture any DML operations (insert, update, delete) when the Oracle PDB name is lowercase; the initial snapshot completes, but no streamed change events follow (see the sketch below).
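      For illustration, a DML sequence of the kind that goes uncaptured is sketched below; the column names are taken from the snapshot SELECT in the log further down, but the literal values are invented:

      -- Illustrative only: values are made up; columns come from the snapshot
      -- query recorded in the connector log below.
      INSERT INTO kuser.customers (customer_id, first_name, last_name, email)
      VALUES (11, 'Test', 'User', 'test.user@example.com');
      UPDATE kuser.customers SET phone_number = '555-0100' WHERE customer_id = 11;
      DELETE FROM kuser.customers WHERE customer_id = 11;
      COMMIT;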

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

Yes, the same behavior occurs with the latest released version.

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

No errors were found in the log:

      ==========================================

      2025-05-19 04:22:50,746 INFO   ||  Connector started for the first time.   [io.debezium.connector.common.BaseSourceTask]
      2025-05-19 04:22:50,747 INFO   ||  ConsumerConfig values:
              allow.auto.create.topics = true
              auto.commit.interval.ms = 5000
              auto.include.jmx.reporter = true
              auto.offset.reset = earliest
              bootstrap.servers = [kafka:9092]
              check.crcs = true
              client.dns.lookup = use_all_dns_ips
              client.id = GBLUATP-schemahistory
              client.rack =
              connections.max.idle.ms = 540000
              default.api.timeout.ms = 60000
              enable.auto.commit = false
              enable.metrics.push = true
              exclude.internal.topics = true
              fetch.max.bytes = 52428800
              fetch.max.wait.ms = 500
              fetch.min.bytes = 1
              group.id = GBLUATP-schemahistory
              group.instance.id = null
              group.protocol = classic
              group.remote.assignor = null
              heartbeat.interval.ms = 3000
              interceptor.classes = []
              internal.leave.group.on.close = true
              internal.throw.on.fetch.stable.offset.unsupported = false
              isolation.level = read_uncommitted
              key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
              max.partition.fetch.bytes = 1048576
              max.poll.interval.ms = 300000
              max.poll.records = 500
              metadata.max.age.ms = 300000
              metadata.recovery.strategy = none
              metric.reporters = []
              metrics.num.samples = 2
              metrics.recording.level = INFO
              metrics.sample.window.ms = 30000
              partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
              receive.buffer.bytes = 65536
              reconnect.backoff.max.ms = 1000
              reconnect.backoff.ms = 50
              request.timeout.ms = 30000
              retry.backoff.max.ms = 1000
              retry.backoff.ms = 100
              sasl.client.callback.handler.class = null
              sasl.jaas.config = null
              sasl.kerberos.kinit.cmd = /usr/bin/kinit
              sasl.kerberos.min.time.before.relogin = 60000
              sasl.kerberos.service.name = null
              sasl.kerberos.ticket.renew.jitter = 0.05
              sasl.kerberos.ticket.renew.window.factor = 0.8
              sasl.login.callback.handler.class = null
              sasl.login.class = null
              sasl.login.connect.timeout.ms = null
              sasl.login.read.timeout.ms = null
              sasl.login.refresh.buffer.seconds = 300
              sasl.login.refresh.min.period.seconds = 60
              sasl.login.refresh.window.factor = 0.8
              sasl.login.refresh.window.jitter = 0.05
              sasl.login.retry.backoff.max.ms = 10000
              sasl.login.retry.backoff.ms = 100
              sasl.mechanism = GSSAPI
              sasl.oauthbearer.clock.skew.seconds = 30
              sasl.oauthbearer.expected.audience = null
              sasl.oauthbearer.expected.issuer = null
              sasl.oauthbearer.header.urlencode = false
              sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
              sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
              sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
              sasl.oauthbearer.jwks.endpoint.url = null
              sasl.oauthbearer.scope.claim.name = scope
              sasl.oauthbearer.sub.claim.name = sub
              sasl.oauthbearer.token.endpoint.url = null
              security.protocol = PLAINTEXT
              security.providers = null
              send.buffer.bytes = 131072
              session.timeout.ms = 10000
              socket.connection.setup.timeout.max.ms = 30000
              socket.connection.setup.timeout.ms = 10000
              ssl.cipher.suites = null
              ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
              ssl.endpoint.identification.algorithm = https
              ssl.engine.factory.class = null
              ssl.key.password = null
              ssl.keymanager.algorithm = SunX509
              ssl.keystore.certificate.chain = null
              ssl.keystore.key = null
              ssl.keystore.location = null
              ssl.keystore.password = null
              ssl.keystore.type = JKS
              ssl.protocol = TLSv1.3
              ssl.provider = null
              ssl.secure.random.implementation = null
              ssl.trustmanager.algorithm = PKIX
              ssl.truststore.certificates = null
              ssl.truststore.location = null
              ssl.truststore.password = null
              ssl.truststore.type = JKS
              value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
         [org.apache.kafka.clients.consumer.ConsumerConfig]
      2025-05-19 04:22:50,748 INFO   ||  initializing Kafka metrics collector   [org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector]
      2025-05-19 04:22:50,750 INFO   ||  Kafka version: 3.9.0   [org.apache.kafka.common.utils.AppInfoParser]
      2025-05-19 04:22:50,750 INFO   ||  Kafka commitId: a60e31147e6b01ee   [org.apache.kafka.common.utils.AppInfoParser]
      2025-05-19 04:22:50,750 INFO   ||  Kafka startTimeMs: 1747628570750   [org.apache.kafka.common.utils.AppInfoParser]
      2025-05-19 04:22:50,753 INFO   ||  [Consumer clientId=GBLUATP-schemahistory, groupId=GBLUATP-schemahistory] Cluster ID: k9h1ebbBTPWboVB_-r0jdQ   [org.apache.kafka.clients.Metadata]
      2025-05-19 04:22:50,754 INFO   ||  [Consumer clientId=GBLUATP-schemahistory, groupId=GBLUATP-schemahistory] Resetting generation and member id due to: consumer pro-actively leaving the group   [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
      2025-05-19 04:22:50,754 INFO   ||  [Consumer clientId=GBLUATP-schemahistory, groupId=GBLUATP-schemahistory] Request joining group due to: consumer pro-actively leaving the group   [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator]
      2025-05-19 04:22:50,754 INFO   ||  Metrics scheduler closed   [org.apache.kafka.common.metrics.Metrics]
      2025-05-19 04:22:50,754 INFO   ||  Closing reporter org.apache.kafka.common.metrics.JmxReporter   [org.apache.kafka.common.metrics.Metrics]
      2025-05-19 04:22:50,754 INFO   ||  Closing reporter org.apache.kafka.common.telemetry.internals.ClientTelemetryReporter   [org.apache.kafka.common.metrics.Metrics]
      2025-05-19 04:22:50,754 INFO   ||  Metrics reporters closed   [org.apache.kafka.common.metrics.Metrics]
      2025-05-19 04:22:50,756 INFO   ||  App info kafka.consumer for GBLUATP-schemahistory unregistered   [org.apache.kafka.common.utils.AppInfoParser]
      2025-05-19 04:22:50,756 INFO   ||  No previous offset found   [io.debezium.connector.oracle.OracleConnectorTask]
      2025-05-19 04:22:50,756 INFO   ||  Requested thread factory for component OracleConnector, id = GBLUATP named = SignalProcessor   [io.debezium.util.Threads]
      2025-05-19 04:22:50,757 INFO   ||  Requested thread factory for component OracleConnector, id = GBLUATP named = change-event-source-coordinator   [io.debezium.util.Threads]
      2025-05-19 04:22:50,757 INFO   ||  Requested thread factory for component OracleConnector, id = GBLUATP named = blocking-snapshot   [io.debezium.util.Threads]
      2025-05-19 04:22:50,757 INFO   ||  Creating thread debezium-oracleconnector-GBLUATP-change-event-source-coordinator   [io.debezium.util.Threads]
      2025-05-19 04:22:50,757 INFO   ||  WorkerSourceTask{id=customers-dbca-0} Source task finished initialization and start   [org.apache.kafka.connect.runtime.AbstractWorkerSourceTask]
      2025-05-19 04:22:50,759 INFO   Oracle|GBLUATP|snapshot  Metrics registered   [io.debezium.pipeline.ChangeEventSourceCoordinator]
      2025-05-19 04:22:50,759 INFO   Oracle|GBLUATP|snapshot  Context created   [io.debezium.pipeline.ChangeEventSourceCoordinator]
      2025-05-19 04:22:50,759 INFO   Oracle|GBLUATP|snapshot  According to the connector configuration both schema and data will be snapshot.   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:50,762 INFO   Oracle|GBLUATP|snapshot  Snapshot step 1 - Preparing   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:50,764 INFO   Oracle|GBLUATP|snapshot  Snapshot step 2 - Determining captured tables   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:52,214 INFO   Oracle|GBLUATP|snapshot  Adding table "kafka".KUSER.CUSTOMERS to the list of capture schema tables   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:52,219 INFO   Oracle|GBLUATP|snapshot  Created connection pool with 1 threads   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:52,220 INFO   Oracle|GBLUATP|snapshot  Snapshot step 3 - Locking captured tables ["kafka".KUSER.CUSTOMERS]   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:52,223 INFO   Oracle|GBLUATP|snapshot  Snapshot step 4 - Determining snapshot offset   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:52,317 INFO   Oracle|GBLUATP|snapshot          No in-progress transactions will be captured.   [io.debezium.connector.oracle.logminer.LogMinerAdapter]
      2025-05-19 04:22:52,319 INFO   Oracle|GBLUATP|snapshot  Connection gracefully closed   [io.debezium.jdbc.JdbcConnection]
      2025-05-19 04:22:52,319 INFO   Oracle|GBLUATP|snapshot  Snapshot step 5 - Reading structure of captured tables   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:52,319 INFO   Oracle|GBLUATP|snapshot  Only captured tables schema should be captured, capturing: ["kafka".KUSER.CUSTOMERS]   [io.debezium.connector.oracle.OracleSnapshotChangeEventSource]
      2025-05-19 04:22:52,412 INFO   Oracle|GBLUATP|snapshot          Registering '"kafka".KUSER.CUSTOMERS' attributes: object_id=74317, data_object_id=74317   [io.debezium.connector.oracle.OracleConnection]
      2025-05-19 04:22:53,190 INFO   Oracle|GBLUATP|snapshot  Snapshot step 6 - Persisting schema history   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:53,191 INFO   Oracle|GBLUATP|snapshot  Capturing structure of table "kafka".KUSER.CUSTOMERS   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:55,360 INFO   Oracle|GBLUATP|snapshot  Already applied 1 database changes   [io.debezium.relational.history.SchemaHistoryMetrics]
      2025-05-19 04:22:55,364 INFO   Oracle|GBLUATP|snapshot  Snapshot step 7 - Snapshotting data   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:55,365 INFO   Oracle|GBLUATP|snapshot  Creating snapshot worker pool with 1 worker thread(s)   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:55,366 INFO   Oracle|GBLUATP|snapshot  For table '"kafka".KUSER.CUSTOMERS' using select statement: 'SELECT "CUSTOMER_ID", "FIRST_NAME", "LAST_NAME", "EMAIL", "PHONE_NUMBER", "ADDRESS", "REGISTRATION_DATE" FROM "KUSER"."CUSTOMERS" AS OF SCN 6103182226952'   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:55,369 INFO   Oracle|GBLUATP|snapshot  Exporting data from table '"kafka".KUSER.CUSTOMERS' (1 of 1 tables)   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:55,387 INFO   Oracle|GBLUATP|snapshot           Finished exporting 10 records for table '"kafka".KUSER.CUSTOMERS' (1 of 1 tables); total duration '00:00:00.018'   [io.debezium.relational.RelationalSnapshotChangeEventSource]
      2025-05-19 04:22:55,390 INFO   Oracle|GBLUATP|snapshot  Snapshot - Final stage   [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
      2025-05-19 04:22:55,394 INFO   Oracle|GBLUATP|snapshot  Snapshot completed   [io.debezium.pipeline.source.AbstractSnapshotChangeEventSource]
      2025-05-19 04:22:55,394 INFO   Oracle|GBLUATP|snapshot  Snapshot ended with SnapshotResult [status=COMPLETED, offset=OracleOffsetContext [scn=6103182226952, commit_scn=[], lcr_position=null]]   [io.debezium.pipeline.ChangeEventSourceCoordinator]
      2025-05-19 04:22:55,394 INFO   Oracle|GBLUATP|streaming  Connected metrics set to 'true'   [io.debezium.pipeline.ChangeEventSourceCoordinator]
      2025-05-19 04:22:55,395 INFO   Oracle|GBLUATP|streaming  SignalProcessor started. Scheduling it every 5000ms   [io.debezium.pipeline.signal.SignalProcessor]
      2025-05-19 04:22:55,395 INFO   Oracle|GBLUATP|streaming  Creating thread debezium-oracleconnector-GBLUATP-SignalProcessor   [io.debezium.util.Threads]
      2025-05-19 04:22:55,395 INFO   Oracle|GBLUATP|streaming  Starting streaming   [io.debezium.pipeline.ChangeEventSourceCoordinator]
      2025-05-19 04:22:55,768 INFO   ||  11 records sent during previous 00:00:05.13, last recorded offset of {server=GBLUATP} partition is {snapshot_scn=6103182226952, snapshot=INITIAL, scn=6103182226952, snapshot_completed=true}   [io.debezium.connector.common.BaseSourceTask]
      2025-05-19 04:22:55,787 WARN   ||  [Producer clientId=connector-producer-customers-dbca-0] The metadata response from the cluster reported a recoverable issue with correlation id 4 : {GBLUATP=LEADER_NOT_AVAILABLE}   [org.apache.kafka.clients.NetworkClient]
      2025-05-19 04:22:55,908 WARN   ||  [Producer clientId=connector-producer-customers-dbca-0] The metadata response from the cluster reported a recoverable issue with correlation id 7 : {GBLUATP.KUSER.CUSTOMERS=LEADER_NOT_AVAILABLE}   [org.apache.kafka.clients.NetworkClient]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming  Redo Log Group Sizes:   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #1: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #2: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #3: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #4: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #5: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #6: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #7: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #8: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #9: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #10: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #11: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]
      2025-05-19 04:22:56,146 INFO   Oracle|GBLUATP|streaming         Group #12: 2147483648 bytes   [io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource]

      ==============================

      How to reproduce the issue using our tutorial deployment?

      <Your answer>
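      For a reproduction attempt, a lowercase-named PDB could be provisioned as in the sketch below; the FILE_NAME_CONVERT paths and the admin credentials are placeholders, not taken from this report:

      -- Sketch: create a PDB whose name is a quoted lowercase identifier.
      CREATE PLUGGABLE DATABASE "kafka"
        ADMIN USER pdbadmin IDENTIFIED BY changeme
        FILE_NAME_CONVERT = ('/opt/oracle/oradata/CDB/pdbseed/',
                             '/opt/oracle/oradata/CDB/kafka/');
      ALTER PLUGGABLE DATABASE "kafka" OPEN;
      SELECT name FROM v$pdbs;  -- should list the new PDB in lowercase: kafka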


              Assignee: Unassigned
              Reporter: Jaimin S (jaimin_s2)
              Votes: 0
              Watchers: 2