-
Bug
-
Resolution: Cannot Reproduce
-
Blocker
-
None
-
2.2.1.Final
-
None
-
False
-
None
-
False
-
Important
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
Oracle connector 2.2.1.Final
What is the connector configuration?
"log.mining.buffer.type": "infinispan_embedded", "log.mining.buffer.infinispan.cache.transactions": "<infinispan><cache-container><local-cache name=\"transactions\"><persistence passivation=\"false\"><file-store read-only=\"false\" preload=\"true\" path=\"\\\\xxx\\debezium_cache\" /></persistence></local-cache></cache-container></infinispan>", "log.mining.buffer.infinispan.cache.processed_transactions": "<infinispan><cache-container><local-cache name=\"processed-transactions\"><persistence passivation=\"false\"><file-store read-only=\"false\" preload=\"true\" path=\"\\\\xxx\\debezium_cache\" /></persistence></local-cache></cache-container></infinispan>", "log.mining.buffer.infinispan.cache.events": "<infinispan><cache-container><local-cache name=\"events\"><persistence passivation=\"false\"><file-store read-only=\"false\" preload=\"true\" path=\"\\\\xxx\\debezium_cache\" /></persistence></local-cache></cache-container></infinispan>", "log.mining.buffer.infinispan.cache.schema_changes": "<infinispan><cache-container><local-cache name=\"schema-changes\"><persistence passivation=\"false\"><file-store read-only=\"false\" preload=\"true\" path=\"\\\\xxx\\debezium_cache\" /></persistence></local-cache></cache-container></infinispan>",
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
On-premises Oracle database.
What behaviour do you expect?
Infinispan embedded works, and Debezium does not lose events when one of the machines in the high-availability setup crashes.
What behaviour do you see?
I have 3 machines connected as a Kafka cluster (3 brokers).
I am testing high availability by crashing one of the machines while Debezium is processing events.
The original issue was that when I stop a machine while it is processing a transaction with approximately 150,000 updates, with buffer type "memory" I lose 200-1000 events when processing fails over to another machine.
I tried to use Infinispan embedded with a shared path for the cache files (the relevant property change is sketched below).
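For clarity, the only buffer-related property I switch between the two runs is log.mining.buffer.type, shown here in standalone .properties form as a minimal excerpt; the JSON form and the cache definitions are in the configuration above, and all other connector properties stay the same:

    # original run: heap-based buffer; events were lost during failover
    log.mining.buffer.type=memory

    # new run: embedded Infinispan buffer backed by the file-store caches configured above
    log.mining.buffer.type=infinispan_embedded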
The caches loaded properly at startup:
Using Infinispan in embedded mode. (io.debezium.connector.oracle.logminer.processor.infinispan.EmbeddedInfinispanLogMinerEventProcessor:73)
[2023-05-19 14:22:46,136] INFO [SRC.CDC.SONATA.DATA|task-0] ISPN000556: Starting user marshaller 'org.infinispan.commons.marshall.ImmutableProtoStreamMarshaller' (org.infinispan.CONTAINER:36)
[2023-05-19 14:22:46,795] INFO [SRC.CDC.SONATA.DATA|task-0] JBoss Threads version 2.3.3.Final (org.jboss.threads:52)
[2023-05-19 14:22:47,003] INFO [SRC.CDC.SONATA.DATA|task-0] Overall Cache Statistics: (io.debezium.connector.oracle.logminer.processor.infinispan.AbstractInfinispanLogMinerEventProcessor:71)
[2023-05-19 14:22:47,012] INFO [SRC.CDC.SONATA.DATA|task-0] Transactions : 0 (io.debezium.connector.oracle.logminer.processor.infinispan.AbstractInfinispanLogMinerEventProcessor:72)
[2023-05-19 14:22:47,013] INFO [SRC.CDC.SONATA.DATA|task-0] Recent Transactions : 0 (io.debezium.connector.oracle.logminer.processor.infinispan.AbstractInfinispanLogMinerEventProcessor:73)
[2023-05-19 14:22:47,016] INFO [SRC.CDC.SONATA.DATA|task-0] Schema Changes : 0 (io.debezium.connector.oracle.logminer.processor.infinispan.AbstractInfinispanLogMinerEventProcessor:74)
[2023-05-19 14:22:47,017] INFO [SRC.CDC.SONATA.DATA|task-0] Events : 0 (io.debezium.connector.oracle.logminer.processor.infinispan.AbstractInfinispanLogMinerEventProcessor:75)
After running an update on the Oracle database, I get an exception and the connector stops:
[2023-05-19 14:26:50,811] ERROR [SRC.CDC.SONATA.DATA|task-0] There was a problem moving indexes for compactor with file 1 (org.infinispan.persistence.sifs.Compactor:641)
java.lang.IllegalStateException: Too many records for this key (short overflow)
    at org.infinispan.persistence.sifs.IndexNode.copyWith(IndexNode.java:680)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:408)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:402)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:524)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:398)
    at io.reactivex.rxjava3.internal.subscribers.LambdaSubscriber.onNext(LambdaSubscriber.java:65)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$ObserveOnSubscriber.runAsync(FlowableObserveOn.java:404)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.run(FlowableObserveOn.java:178)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run(ExecutorScheduler.java:324)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.runEager(ExecutorScheduler.java:289)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.run(ExecutorScheduler.java:250)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2023-05-19 14:26:50,814] ERROR [SRC.CDC.SONATA.DATA|task-0] There was a problem moving indexes for compactor with file 1 (org.infinispan.persistence.sifs.Compactor:641)
java.lang.IllegalStateException: Too many records for this key (short overflow)
    at org.infinispan.persistence.sifs.IndexNode.copyWith(IndexNode.java:680)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:408)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:402)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:524)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:398)
    at io.reactivex.rxjava3.internal.subscribers.LambdaSubscriber.onNext(LambdaSubscriber.java:65)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$ObserveOnSubscriber.runAsync(FlowableObserveOn.java:404)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.run(FlowableObserveOn.java:178)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run(ExecutorScheduler.java:324)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.runEager(ExecutorScheduler.java:289)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.run(ExecutorScheduler.java:250)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.base/java.lang.Thread.run(Thread.java:833)
...
[2023-05-19 14:31:54,757] ERROR [SRC.CDC.SONATA.DATA|task-0] There was a problem moving indexes for compactor with file 1 (org.infinispan.persistence.sifs.Compactor:641)
java.lang.IllegalStateException: Too many records for this key (short overflow)
    at org.infinispan.persistence.sifs.IndexNode.copyWith(IndexNode.java:680)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:408)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:402)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:524)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:398)
    at io.reactivex.rxjava3.internal.subscribers.LambdaSubscriber.onNext(LambdaSubscriber.java:65)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$ObserveOnSubscriber.runAsync(FlowableObserveOn.java:404)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.run(FlowableObserveOn.java:178)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run(ExecutorScheduler.java:324)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.runEager(ExecutorScheduler.java:289)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.run(ExecutorScheduler.java:250)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2023-05-19 14:32:05,282] ERROR [SRC.CDC.SONATA.DATA|task-0] ISPN000136: Error executing command RemoveCommand on Cache 'transactions', writing keys [0e001b0065ad0200] (org.infinispan.interceptors.impl.InvocationContextInterceptor:126)
java.lang.IllegalStateException: Too many records for this key (short overflow)
    at org.infinispan.persistence.sifs.IndexNode.copyWith(IndexNode.java:680)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:408)
    at org.infinispan.persistence.sifs.IndexNode.setPosition(IndexNode.java:402)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:524)
    at org.infinispan.persistence.sifs.Index$Segment.accept(Index.java:398)
    at io.reactivex.rxjava3.internal.subscribers.LambdaSubscriber.onNext(LambdaSubscriber.java:65)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$ObserveOnSubscriber.runAsync(FlowableObserveOn.java:404)
    at io.reactivex.rxjava3.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.run(FlowableObserveOn.java:178)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run(ExecutorScheduler.java:324)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.runEager(ExecutorScheduler.java:289)
    at io.reactivex.rxjava3.internal.schedulers.ExecutorScheduler$ExecutorWorker.run(ExecutorScheduler.java:250)
    at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.base/java.lang.Thread.run(Thread.java:833)
[2023-05-19 14:32:05,283] INFO [SRC.CDC.SONATA.DATA|task-0] Shutting down infinispan embedded caches (io.debezium.connector.oracle.logminer.processor.infinispan.EmbeddedInfinispanLogMinerEventProcessor:100)
Could someone help me with this, please?
Please let me know if you need more information.
relates to:
- ISPN-14514 SIFS results in "Too many records for this key" exception (Resolved)
- DBZ-6557 Add Infinispan replicated cache support for EmbeddedInfinispanLogMinerEventProcessor (Closed)