JBoss Enterprise Application Platform / JBEAP-26160

[CLUSTERING] Intermittent "IJ031013: Interrupted attempting lock" with mssql DB


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version/s: 8.0.0.Beta-CR1, 8.0.0.GA-CR2
    • Component/s: Clustering
    • Release Notes Type: Known Issue

      Scenario: a fail-over scenario on the usual 4-node cluster, where each node is configured to store session data in an invalidation-cache backed by an MS SQL Server 2008 database:

      # boot the HA profile in embedded mode
      embed-server --server-config=standalone-ha.xml
      # switch the ee channel to the tcp JGroups stack
      /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)
      # invalidation cache that will hold the web session data
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload:add()
      # datasource pointing at the MS SQL Server instance
      data-source add --name=testDS --jndi-name=java:jboss/datasources/testDS --driver-name=mssql-connector.jar --connection-url=jdbc:sqlserver://mssql-2022.mwqe.psi.rdu2.redhat.com:1433;DatabaseName=dballo19 --enabled=true --jta=true --use-java-context=true --transaction-isolation=TRANSACTION_READ_COMMITTED --min-pool-size=10 --max-pool-size=100 --pool-prefill=true --user-name=dballo19 --password=dballo19
      # shared JDBC store backed by the datasource above
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc:add(data-source=testDS,fetch-state=false,passivation=false,purge=false,shared=true,dialect=SQL_SERVER){allow-resource-service-restart=true}
      # column names and types of the string-keyed table
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc/table=string:write-attribute(name=id-column.name,value=id)
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc/table=string:write-attribute(name=data-column.name,value=datum)
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc/table=string:write-attribute(name=timestamp-column.name,value=version)
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc/table=string:write-attribute(name=id-column.type,value=VARCHAR(255))
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc/table=string:write-attribute(name=timestamp-column.type,value=BIGINT)
      /subsystem=infinispan/cache-container=web/invalidation-cache=offload/store=jdbc/table=string:write-attribute(name=data-column.type,value=VARBINARY(5000))
      # make the new cache the default for web sessions
      /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache, value=offload)
      # unique transaction node identifier for this node
      /subsystem=transactions:write-attribute(name=node-identifier,value=wildfly1)
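
      For reference, a minimal JDBC sketch of what the configured string-keyed store looks like from the database side. The table name ispn_entry_offload is an assumption (Infinispan's default "ispn_entry" prefix plus the cache name); the connection settings and the column names/types come from the CLI above, and the MS SQL JDBC driver must be on the classpath:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class OffloadTableCheck {
          public static void main(String[] args) throws Exception {
              // Connection settings copied from the data-source definition above.
              String url = "jdbc:sqlserver://mssql-2022.mwqe.psi.rdu2.redhat.com:1433;DatabaseName=dballo19";
              try (Connection c = DriverManager.getConnection(url, "dballo19", "dballo19");
                   // Table name is an assumption: Infinispan's default "ispn_entry"
                   // prefix plus the cache name. Columns match the CLI configuration
                   // (id VARCHAR(255), datum VARBINARY(5000), version BIGINT).
                   PreparedStatement ps = c.prepareStatement(
                           "SELECT id, datum, version FROM ispn_entry_offload")) {
                  try (ResultSet rs = ps.executeQuery()) {
                      while (rs.next()) {
                          System.out.printf("id=%s version=%d datum=%d bytes%n",
                                  rs.getString("id"), rs.getLong("version"),
                                  rs.getBytes("datum").length);
                      }
                  }
              }
          }
      }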
      

      When a node leaves the cluster, we intermittently observe the following error on the other nodes (in this case node4 left the cluster and the error popped up on node1):

      2023-11-30 00:05:16,909 INFO  [org.infinispan.LIFECYCLE] (thread-31,ejb,wildfly1) [Context=cbnc.ear/clusterbench-ejb.jar/default] ISPN100002: Starting rebalance with members [wildfly1, wildfly2, wildfly3], phase READ_OLD_WRITE_ALL, topology id 48
      2023-11-30 00:05:16,918 ERROR [org.infinispan.PERSISTENCE] (SchedulerTopologyChangeListener - 2) ISPN008032: Error during rollback of JDBC transaction: java.sql.SQLException: IJ031013: Interrupted attempting lock: org.jboss.jca.adapters.jdbc.local.LocalManagedConnection@26535660
      	at org.jboss.ironjacamar.jdbcadapters@3.0.2.Final-redhat-00002//org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.tryLock(BaseWrapperManagedConnection.java:432)
      	at org.jboss.ironjacamar.jdbcadapters@3.0.2.Final-redhat-00002//org.jboss.jca.adapters.jdbc.WrappedConnection.lock(WrappedConnection.java:165)
      	at org.jboss.ironjacamar.jdbcadapters@3.0.2.Final-redhat-00002//org.jboss.jca.adapters.jdbc.WrappedConnection.rollback(WrappedConnection.java:915)
      	at org.infinispan.persistence.jdbc@14.0.17.Final-redhat-00002//org.infinispan.persistence.jdbc.common.sql.BaseTableOperations$FlowableConnection.close(BaseTableOperations.java:277)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableUsing$UsingSubscriber.disposeResource(FlowableUsing.java:172)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableUsing$UsingSubscriber.onError(FlowableUsing.java:127)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableDoFinally$DoFinallySubscriber.onError(FlowableDoFinally.java:90)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.subscriptions.EmptySubscription.error(EmptySubscription.java:55)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableFromIterable.subscribe(FlowableFromIterable.java:56)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableFromIterable.subscribeActual(FlowableFromIterable.java:47)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableDoFinally.subscribeActual(FlowableDoFinally.java:47)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:15959)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableUsing.subscribeActual(FlowableUsing.java:73)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableMap.subscribeActual(FlowableMap.java:38)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:15959)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableUsing.subscribeActual(FlowableUsing.java:73)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:15959)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableConcatArray$ConcatArraySubscriber.onComplete(FlowableConcatArray.java:141)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableConcatArray.subscribeActual(FlowableConcatArray.java:41)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableMap.subscribeActual(FlowableMap.java:36)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableFilter.subscribeActual(FlowableFilter.java:38)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.FlowableOnErrorNext.subscribeActual(FlowableOnErrorNext.java:39)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.core.Flowable.subscribe(Flowable.java:16013)
      	at io.reactivex.rxjava3.rxjava@3.1.6.redhat-00001//io.reactivex.rxjava3.internal.operators.flowable.BlockingFlowableIterable.iterator(BlockingFlowableIterable.java:42)
      	at org.infinispan.commons@14.0.17.Final-redhat-00002//org.infinispan.commons.util.Closeables.iterator(Closeables.java:238)
      	at org.infinispan@14.0.17.Final-redhat-00002//org.infinispan.stream.impl.DistributedCacheStream.iterator(DistributedCacheStream.java:370)
      	at org.infinispan@14.0.17.Final-redhat-00002//org.infinispan.util.AbstractDelegatingCacheStream.iterator(AbstractDelegatingCacheStream.java:182)
      	at org.wildfly.clustering.ee.infinispan@8.0.0.GA-redhat-00010//org.wildfly.clustering.ee.infinispan.scheduler.ScheduleLocalKeysTask.accept(ScheduleLocalKeysTask.java:61)
      	at org.wildfly.clustering.ee.infinispan@8.0.0.GA-redhat-00010//org.wildfly.clustering.ee.infinispan.scheduler.ScheduleLocalKeysTask.accept(ScheduleLocalKeysTask.java:42)
      	at org.wildfly.clustering.ee.infinispan@8.0.0.GA-redhat-00010//org.wildfly.clustering.ee.infinispan.scheduler.SchedulerTopologyChangeListener.lambda$topologyChanged$2(SchedulerTopologyChangeListener.java:121)
      	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
      	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
      	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
      	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
      	at org.wildfly.clustering.context@8.0.0.GA-redhat-00010//org.wildfly.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
      	at org.wildfly.clustering.context@8.0.0.GA-redhat-00010//org.wildfly.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:78)
      	at java.base/java.lang.Thread.run(Thread.java:833)
      	at org.jboss.threads@2.4.0.Final-redhat-00001//org.jboss.threads.JBossThread.run(JBossThread.java:513)
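
      Reading the trace bottom-up: ScheduleLocalKeysTask iterates the JDBC store through an RxJava Flowable on a SchedulerTopologyChangeListener worker thread, and the rollback in BaseTableOperations$FlowableConnection.close fails because that worker has been interrupted, presumably by a subsequent topology change cancelling the in-flight schedule task. A minimal sketch of the interruptible-lock pattern behind IJ031013 (class name and timeout are illustrative, not IronJacamar's actual code):

      import java.sql.SQLException;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.locks.ReentrantLock;

      // Illustrative stand-in for a managed-connection wrapper: every JDBC
      // call, including rollback(), must first take an internal lock. If the
      // calling thread is interrupted while waiting (e.g. an in-flight
      // topology-change task being cancelled), the lock attempt surfaces as
      // an SQLException, which is the shape of "IJ031013: Interrupted
      // attempting lock" seen above.
      class LockedConnectionSketch {
          private final ReentrantLock lock = new ReentrantLock(true);

          void rollback() throws SQLException {
              tryLock();
              try {
                  // delegate to the underlying physical connection here
              } finally {
                  lock.unlock();
              }
          }

          private void tryLock() throws SQLException {
              try {
                  // the 60s timeout is illustrative, not IronJacamar's default
                  if (!lock.tryLock(60, TimeUnit.SECONDS)) {
                      throw new SQLException("Unable to obtain lock: " + this);
                  }
              } catch (InterruptedException ie) {
                  Thread.currentThread().interrupt();
                  throw new SQLException("Interrupted attempting lock: " + this);
              }
          }
      }

      This is consistent with the intermittency: the error only shows up when a node leaves while a previous topology-change task is still holding a pooled connection open for its key iteration.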
      

      Complete logs in eap-8.x-clustering-db-session-shutdown-repl-mssql#99.zip

              Assignee: Unassigned
              Reporter: Tommaso Borgato (tborgato@redhat.com)
              Votes: 0
              Watchers: 3