  Debezium / DBZ-8070

In hybrid log.mining.strategy reconstruction logs should be set to DEBUG


    Priority: Moderate

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, please provide this information:

      What Debezium connector do you use and what version?

      Debezium Connector for Oracle - version 2.7.0.Final

      Running on Strimzi / OpenShift

      What is the connector configuration?


      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnector
      metadata: 
        name: my-oracle-debezium-connector
        labels:
          strimzi.io/cluster: connect-cluster
      spec:
        class: io.debezium.connector.oracle.OracleConnector
        tasksMax: 1
        autoRestart:
          enabled: false
          maxRestarts: 5
        config:
          log.mining.strategy: hybrid
          key.converter: io.confluent.connect.avro.AvroConverter
          key.converter.schema.registry.url: https://schema-registry-url/
          value.converter: io.confluent.connect.avro.AvroConverter
          value.converter.schema.registry.url: https://schema-registry-url/
          database.user: ${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}
          database.password: ${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}
          database.url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=DB_HOSTNAME)(PORT=1521)))(CONNECT_DATA=(SID=ASYRHE01)))
          database.dbname: ASYRHE01
          topic.prefix: ch-syrius-uk-streams-api.cdc
          schema.include.list: HEVPROD
          table.include.list: HEVPROD.TABLE1,HEVPROD.TABLE2,HEVPROD.TABLE3,...
          snapshot.max.threads: 10
          time.precision.mode: connect
          decimal.handling.mode: precise
          schema.name.adjustment.mode: avro
          field.name.adjustment.mode: avro
          log.mining.batch.size.default: 100000
          log.mining.batch.size.min: 10000
          log.mining.batch.size.max: 200000
          heartbeat.interval.ms: 300000
          notification.enabled.channels: log,sink
          notification.sink.topic.name: __debezium-notification.ch-syrius-uk-streams-api.cdc
          schema.history.internal.kafka.bootstrap.servers: confluent-kafka-bootstrap-server
          schema.history.internal.kafka.topic: ch-syrius-uk-streams-api.cdc.database-schema-history
          schema.history.internal.store.only.captured.databases.ddl: true
          schema.history.internal.store.only.captured.tables.ddl: true
          schema.history.internal.producer.security.protocol: SASL_SSL
          schema.history.internal.producer.sasl.jaas.config: ${env:SASL_JAAS_CONFIG}
          schema.history.internal.producer.sasl.mechanism: PLAIN
          schema.history.internal.consumer.security.protocol: SASL_SSL
          schema.history.internal.consumer.sasl.jaas.config: ${env:SASL_JAAS_CONFIG}
          schema.history.internal.consumer.sasl.mechanism: PLAIN
          signal.enabled.channels: source,kafka
          signal.data.collection: ASYRHE01.DBZ.__debezium_signal
          signal.kafka.bootstrap.servers: confluent-kafka-bootstrap-server
          signal.kafka.consumer.offset.commit.enabled: true
          signal.kafka.groupId: connect-ch-syrius.cdc-signal
          signal.kafka.topic: __debezium-signaling.ch-syrius-uk-streams-api.cdc
          signal.consumer.security.protocol: SASL_SSL
          signal.consumer.sasl.jaas.config: ${env:SASL_JAAS_CONFIG}
          signal.consumer.sasl.mechanism: PLAIN
          signal.consumer.key.converter: org.apache.kafka.connect.storage.StringConverter
          signal.consumer.value.converter: org.apache.kafka.connect.json.JsonConverter
          incremental.snapshot.watermarking.strategy: insert_insert
          transforms: Filter
          transforms.Filter.type: org.apache.kafka.connect.transforms.Filter
          transforms.Filter.predicate: IsSignal
          predicates: IsSignal
          predicates.IsSignal.type: org.apache.kafka.connect.transforms.predicates.TopicNameMatches
          predicates.IsSignal.pattern: .*\.__debezium_signal 


      What is the captured database version and mode of deployment?

      (E.g. on-premises, with a specific cloud provider, etc.)

      The database is Oracle Database 19c Enterprise Edition.

      Deployment: SaaS solution hosted on Swisscom infrastructure

      What behavior do you expect?

      The SQL reconstruction log entries produced by the hybrid mining strategy should be emitted at DEBUG level, so that normal operation does not flood the connector logs or drive up CPU usage.

      What behavior do you see?

      Excessive CPU consumption, possibly caused by the high volume of logs generated. See https://debezium.zulipchat.com/#narrow/stream/348250-community-oracle/topic/High.20cpu.20consumption.20and.20logs.20output.20-.20deadlock.20situation.20.3F

      Do you see the same behaviour using the latest released Debezium version?

      (Ideally, also verify with latest Alpha/Beta/CR version)

      Yes; we are already running the latest release (2.7.0.Final).

      Do you have the connector logs, ideally from start till finish?

      (You might be asked later to provide DEBUG/TRACE level log)

      Yes

      How to reproduce the issue using our tutorial deployment?

      <Your answer>

      Feature request or enhancement

      For feature requests or enhancements, please provide this information:

      Which use case/requirement will be addressed by the proposed feature?

      Lower the connector's log volume, and the CPU overhead that comes with it, when log.mining.strategy is set to hybrid.

      Implementation ideas (optional)

      Change the log level to DEBUG for the log entries at lines 365, 368, 372, and 375 of the AbstractLogMinerEventProcessor class: https://github.com/debezium/debezium/blob/main/debezium-connector-oracle/src/main/java/io/debezium/connector/oracle/logminer/processor/AbstractLogMinerEventProcessor.java
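
      A minimal sketch of the proposed change, assuming the affected entries go through the class's SLF4J logger at WARN or INFO level. The method name and message text below are illustrative placeholders, not the actual code at those lines:

          import org.slf4j.Logger;
          import org.slf4j.LoggerFactory;

          public class ReconstructionLoggingSketch {

              private static final Logger LOGGER = LoggerFactory.getLogger(ReconstructionLoggingSketch.class);

              // Hypothetical stand-in for one of the four reconstruction entries
              // (lines 365, 368, 372, and 375 of AbstractLogMinerEventProcessor).
              void reportReconstruction(String tableName, String reconstructedSql) {
                  // Assumed current behavior: logged unconditionally above DEBUG, flooding
                  // the logs whenever the hybrid strategy reconstructs SQL.
                  // LOGGER.warn("Reconstructed SQL for table {}: {}", tableName, reconstructedSql);

                  // Proposed: demote to DEBUG so the entries only appear when explicitly enabled.
                  LOGGER.debug("Reconstructed SQL for table {}: {}", tableName, reconstructedSql);
              }
          }

      Because SLF4J defers formatting of parameterized messages until the level check passes, demoting these calls to debug() makes them essentially free unless DEBUG is explicitly enabled for the logger.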

            Assignee: Chris Cranford (ccranfor@redhat.com)
            Reporter: Thibaut Cazenave (thibaut.cazenave)
            Votes: 0
            Watchers: 4
