Bug
Resolution: Not a Bug
Major
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
The only change our DBA made was to increase the undo tablespace, after seeing the following error:
ORA-01555: snapshot too old: rollback segment number 30 with name "_SYSSMU30_2784324097$" too small

https://docs.oracle.com/error-help/db/ora-01555/
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:709)
	at ...
After the DBA increased the undo tablespace, Debezium kept raising the error below, and multiple Debezium restarts did not clear it. Only a database restart fixed it.
java.sql.SQLException: ORA-01157: cannot identify/lock data file 38 - see DBWR trace file
ORA-01110: data file 38: '+FOLDER/FOLDER/DATAFILE/undotbs1.699.1207322847'

https://docs.oracle.com/error-help/db/ora-01157/
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:709)
	at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:609)
The undotbs1 datafile above is the one the DBA changed.
We are wondering why Debezium checks this file in the first place.
Only a DB node restart, followed by a Debezium restart, fixed the problem.
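For anyone diagnosing the same ORA-01157, one way to confirm from the application side that a datafile is the one Oracle cannot identify or lock is to query v$datafile over JDBC and look for files that are not ONLINE. This is only a hedged sketch: the connection details are placeholders, and it assumes the connecting user has SELECT access to v$datafile.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatafileStatusCheck {
    // Datafiles whose status is neither ONLINE nor SYSTEM are candidates
    // for ORA-01157 (cannot identify/lock data file).
    static final String DATAFILE_STATUS_SQL =
            "SELECT file#, name, status FROM v$datafile "
            + "WHERE status NOT IN ('ONLINE', 'SYSTEM')";

    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            // No connection details supplied; just print the query.
            System.out.println(DATAFILE_STATUS_SQL);
            return;
        }
        // args: <jdbc-url> <user> <password>
        // e.g. jdbc:oracle:thin:@//db-host:1521/ORCLPDB1 (placeholder URL)
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(DATAFILE_STATUS_SQL)) {
            while (rs.next()) {
                System.out.printf("file#=%d status=%s name=%s%n",
                        rs.getLong(1), rs.getString(3), rs.getString(2));
            }
        }
    }
}
```

An empty result set would suggest the datafile problem has been resolved on the database side and the remaining failures are cached state in the session or connector.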
What Debezium connector do you use and what version?
implementation 'io.debezium:debezium-api:3.1.1.Final'
implementation 'io.debezium:debezium-embedded:3.1.1.Final'
implementation 'org.apache.kafka:connect-runtime:3.8.0'
implementation 'io.debezium:debezium-connector-oracle:3.1.1.Final'
implementation "com.oracle.database.jdbc:ojdbc11:23.5.0.24.07"
implementation 'io.debezium:debezium-storage-kafka:3.1.1.Final'
What is the connector configuration?
decimalHandlingMode: string
name: <<NAME>>
class: io.debezium.connector.oracle.OracleConnector
offsetTopic: <<OFFSETTOPICNAME>>
offsetTopicNumberOfPartitions: 1
offsetTopicReplicationFactor: 1
heartbeatIntervalInMsecs: 100000
heartbeatQuery: SELECT * FROM MYTABLE WHERE ROWNUM <= 1
database.query.timeout.ms: 600000

.with("name", wmsDbzConnectorName)
.with("connector.class", wmsDbzConnectorClass)
.with("offset.storage", "org.apache.kafka.connect.storage.KafkaOffsetBackingStore")
.with("offset.storage.topic", debeziumConnectorOffsetTopic)
.with(DistributedConfig.OFFSET_STORAGE_PARTITIONS_CONFIG, debeziumConnectorOffsetTopicPartitions)
.with(DistributedConfig.OFFSET_STORAGE_REPLICATION_FACTOR_CONFIG, debeziumConnectorOffsetTopicReplicationFactor)
.with("offset.flush.interval.ms", "60000")
.with("database.hostname", wmsDbHost)
.with("database.port", port)
.with("database.user", username)
.with("database.password", password)
.with("database.dbname", dbName)
.with("schema.include.list", schemaList)
.with("table.include.list", tableList)
.with("include.schema.changes", "false")
.with("topic.prefix", topicPrefix)
.with("database.server.name", dbserverName)
.with("snapshot.mode", snapshotMode) // It is set as initial
.with("converter.schemas.enable", "false")
.with("decimal.handling.mode", decimalHandlingMode)
.with("heartbeat.interval.ms", heartbeatInterval)
.with("heartbeat.action.query", heartbeatActionQuery)
.with("database.query.timeout", 600000)
.with("schema.history.internal.kafka.topic", schemaTopic)
.with("schema.history.internal.kafka.bootstrap.servers", schemaBootstrapServers)
.with("schema.history.internal.consumer.security.protocol", schemaSecurityProtocol)
.with("schema.history.internal.consumer.ssl.keystore.type", schemaSslKeyStoreType)
.with("schema.history.internal.consumer.ssl.keystore.location", schemaSslKeystoreLocation)
.with("schema.history.internal.consumer.ssl.keystore.password", schemaSslKeystorePassword)
.with("schema.history.internal.consumer.ssl.truststore.type", schemaSslTrustStoreType)
.with("schema.history.internal.consumer.ssl.truststore.location", schemaSslTruststoreLocation)
.with("schema.history.internal.consumer.ssl.truststore.password", schemaSslTruststorePassword)
.with("schema.history.internal.consumer.ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)
.with("schema.history.internal.producer.security.protocol", schemaSecurityProtocol)
.with("schema.history.internal.producer.ssl.keystore.type", schemaSslKeyStoreType)
.with("schema.history.internal.producer.ssl.keystore.location", schemaSslKeystoreLocation)
.with("schema.history.internal.producer.ssl.keystore.password", schemaSslKeystorePassword)
.with("schema.history.internal.producer.ssl.truststore.type", schemaSslTrustStoreType)
.with("schema.history.internal.producer.ssl.truststore.location", schemaSslTruststoreLocation)
.with("schema.history.internal.producer.ssl.truststore.password", schemaSslTruststorePassword)
.with("schema.history.internal.producer.ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)
.with("bootstrap.servers", schemaBootstrapServers)
.with("security.protocol", schemaSecurityProtocol)
.with("ssl.keystore.location", schemaSslKeystoreLocation)
.with("ssl.keystore.password", schemaSslKeystorePassword)
.with("ssl.truststore.location", schemaSslTruststoreLocation)
.with("ssl.truststore.password", schemaSslTruststorePassword)
.with("ssl.endpoint.identification.algorithm", sslEndpointAlgorithm)
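For readers less familiar with the embedded engine, the builder chain above ultimately boils down to plain java.util.Properties handed to DebeziumEngine.create(Json.class).using(props) from debezium-embedded. The following is a minimal, abbreviated sketch, not our actual setup; the host, names, and credentials are invented placeholders.

```java
import java.util.Properties;

public class EngineProps {
    // Core connector properties, abbreviated. The full set matches the
    // .with(...) chain in the report; values below are placeholders.
    // These props are what gets passed to
    // DebeziumEngine.create(Json.class).using(props).notifying(...).build()
    static Properties build() {
        Properties props = new Properties();
        props.setProperty("name", "oracle-connector");          // placeholder
        props.setProperty("connector.class",
                "io.debezium.connector.oracle.OracleConnector");
        props.setProperty("offset.storage",
                "org.apache.kafka.connect.storage.KafkaOffsetBackingStore");
        props.setProperty("database.hostname", "db-host");      // placeholder
        props.setProperty("database.port", "1521");             // placeholder
        props.setProperty("database.user", "dbzuser");          // placeholder
        props.setProperty("database.password", "secret");       // placeholder
        props.setProperty("database.dbname", "ORCLCDB");        // placeholder
        props.setProperty("topic.prefix", "wms");               // placeholder
        props.setProperty("snapshot.mode", "initial");
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        System.out.println("connector.class = " + props.getProperty("connector.class"));
    }
}
```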
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
The database runs in our data center and is Oracle 19c.
The dev database server runs on a two-node RAC. The archived log files (which LogMiner reads) are available to both RAC nodes via a shared path.
What behavior do you expect?
We expected Debezium not to rely on the undo tablespace datafile. Why does its check of that file cause a problem?
What behavior do you see?
As described above.
Do you have the connector logs, ideally from start till finish?
Yes
How to reproduce the issue using our tutorial deployment?
Make changes to the undo tablespace datafile.
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
Continuous flow of Debezium events without interruption.
Implementation ideas (optional)
Debezium should not need to check the status of that file.