Infinispan / ISPN-8232

Transaction inconsistency during network partitions



    • Type: Bug
    • Resolution: Obsolete
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: 9.1.0.Final
    • Component/s: Transactions

      In a scenario where the originator stays in the minority partition (in our test suite, the originator-isolated tests), it is possible for a transaction to be both committed and rolled back in the majority partition.

      With Pessimistic Locking, the transaction is committed in one phase using the PrepareCommand. If the partition happens while the originator is sending the PrepareCommand, the nodes in the majority partition may or may not receive it. We can end up in a state where some nodes receive and apply the PrepareCommand while others never receive it, as sketched below.
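
      The following is a minimal, self-contained sketch of that window; it is not Infinispan code and every name in it is hypothetical. It only illustrates how a one-phase PrepareCommand that reaches part of the majority partition leaves the surviving nodes with divergent data.

{code:java}
// Hypothetical sketch: a one-phase prepare broadcast that a partition interrupts mid-flight.
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class OnePhasePrepareSketch {

    record PrepareCommand(String txId, Map<String, String> writes) {}

    static class Node {
        final String name;
        final Map<String, String> store = new ConcurrentHashMap<>();

        Node(String name) { this.name = name; }

        // One-phase commit: receiving the prepare also commits the writes.
        void applyOnePhasePrepare(PrepareCommand cmd) {
            store.putAll(cmd.writes());
        }
    }

    public static void main(String[] args) {
        Node a = new Node("A");
        Node b = new Node("B");
        List<Node> majorityPartition = List.of(a, b);

        // The partition hits while the originator is broadcasting: B never gets the command.
        Set<String> unreachable = Set.of("B");

        PrepareCommand cmd = new PrepareCommand("tx-1", Map.of("key", "value"));
        for (Node node : majorityPartition) {
            if (!unreachable.contains(node.name)) {
                node.applyOnePhasePrepare(cmd);
            }
        }

        // A committed the write, B did not: the majority partition is now inconsistent.
        System.out.println("A sees key=" + a.store.get("key") + ", B sees key=" + b.store.get("key"));
    }
}
{code}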

      When the topology is updated in the majority partition, the TransactionTable rolls back every transaction whose originator is no longer present. So, on the nodes where the PrepareCommand wasn't received, the transaction is rolled back, roughly as in the sketch below.
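
      A hedged sketch of this cleanup, not the actual TransactionTable implementation (RemoteTx, Address and the "rollback" flag are simplified placeholders): on a topology update, remote transactions whose originator is missing from the new view are rolled back and forgotten.

{code:java}
// Hypothetical sketch: rolling back remote transactions whose originator left the view.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class TransactionTableSketch {

    record Address(String name) {}

    static class RemoteTx {
        final String id;
        final Address originator;
        volatile boolean rolledBack;

        RemoteTx(String id, Address originator) {
            this.id = id;
            this.originator = originator;
        }
    }

    private final Map<String, RemoteTx> remoteTransactions = new ConcurrentHashMap<>();

    void register(RemoteTx tx) {
        remoteTransactions.put(tx.id, tx);
    }

    // Invoked on a topology update: any transaction whose originator is missing from the
    // new view is rolled back. The table cannot tell a graceful leave from a crash.
    void onTopologyChange(Set<Address> newMembers) {
        for (RemoteTx tx : remoteTransactions.values()) {
            if (!newMembers.contains(tx.originator)) {
                tx.rolledBack = true;              // simplified "rollback"
                remoteTransactions.remove(tx.id);  // ...and mark it as completed
            }
        }
    }

    public static void main(String[] args) {
        TransactionTableSketch table = new TransactionTableSketch();
        RemoteTx tx = new RemoteTx("tx-1", new Address("isolated-originator"));
        table.register(tx);

        // The originator is not in the new (majority) view, so its transaction is rolled back.
        table.onTopologyChange(Set.of(new Address("A"), new Address("B")));
        System.out.println("tx-1 rolled back? " + tx.rolledBack);
    }
}
{code}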

      The originator in the minority partition detects the partition and marks the transaction as partially completed. When the merge occurs, it tries to commit the transaction again. On the nodes where the transaction was rolled back, the transaction is marked as completed, so when the retried PrepareCommand is received they throw an IllegalStateException (TransactionTable:386, getOrCreateRemoteTransaction()). In this case, the transaction isn't removed from the PartitionHandlingManager and our test suite fails with "there are pending tx".
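
      The failure path can be sketched as follows (again simplified and hypothetical, not the real TransactionTable.getOrCreateRemoteTransaction()): once the rollback recorded the transaction as completed, the re-delivered PrepareCommand for the same global transaction is rejected instead of being applied.

{code:java}
// Hypothetical sketch: a retried prepare is rejected because the transaction was already
// rolled back (and therefore marked as completed) while the originator was unreachable.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class RetryAfterMergeSketch {

    static final Set<String> completedTransactions = ConcurrentHashMap.newKeySet();
    static final Map<String, Object> remoteTransactions = new ConcurrentHashMap<>();

    static Object getOrCreateRemoteTransaction(String globalTx) {
        if (completedTransactions.contains(globalTx)) {
            // The transaction was rolled back when the originator disappeared, so the
            // PrepareCommand retried after the merge can no longer be applied.
            throw new IllegalStateException("Remote transaction already completed: " + globalTx);
        }
        return remoteTransactions.computeIfAbsent(globalTx, k -> new Object());
    }

    public static void main(String[] args) {
        // The topology update rolled the transaction back and recorded it as completed...
        completedTransactions.add("tx-1");

        // ...so the originator's retry after the merge fails instead of committing, and the
        // transaction is never removed from the (hypothetical) partition-handling bookkeeping.
        getOrCreateRemoteTransaction("tx-1");
    }
}
{code}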

      Another theoretical scenario is the PrepareCommand being executed when no locks are acquired.

      The same issue can happen with Optimistic Locking for the CommitCommand.

      The problem is that the transaction table can't tell whether the node left gracefully or not. A solution would be to keep an "expected members" list, ideally separate from the CacheTopology to avoid sending it with every topology update. It would also need some sysadmin tooling for the case where a node crashes and won't be back online for a while (or, for some reason, never needs to come back).
      A sysadmin could remove the node from this list (the CacheTopology is already updated and there is no need to increase it) and decide what to do with the pending transactions (or an automatic mechanism could auto-commit/rollback them). A sketch of such a list follows.
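
      A minimal sketch of the proposed "expected members" list, kept separate from the CacheTopology; everything in it is hypothetical and only illustrates how a sysadmin operation could forget a crashed node and resolve the transactions it originated.

{code:java}
// Hypothetical sketch: an "expected members" list kept outside the CacheTopology, plus a
// sysadmin operation to forget a crashed node and resolve its pending transactions.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

class ExpectedMembersSketch {

    enum Decision { COMMIT, ROLLBACK }

    private final Set<String> expectedMembers = ConcurrentHashMap.newKeySet();
    private final Map<String, Set<String>> pendingTxByOriginator = new ConcurrentHashMap<>();

    void nodeJoined(String node) {
        expectedMembers.add(node);
    }

    void nodeLeftGracefully(String node) {
        expectedMembers.remove(node);
    }

    // A node missing from the current view but still expected crashed rather than left.
    boolean crashed(String node) {
        return expectedMembers.contains(node);
    }

    // Sysadmin tool: forget a node that won't come back and decide what to do with the
    // transactions it originated (an automatic policy could call this as well).
    void forgetNode(String node, Decision decision, Consumer<String> txResolver) {
        expectedMembers.remove(node);
        for (String tx : pendingTxByOriginator.getOrDefault(node, Set.of())) {
            txResolver.accept(tx + " -> " + decision);
        }
        pendingTxByOriginator.remove(node);
    }

    public static void main(String[] args) {
        ExpectedMembersSketch sketch = new ExpectedMembersSketch();
        sketch.nodeJoined("node-A");
        sketch.pendingTxByOriginator.put("node-A", Set.of("tx-1"));

        // node-A is not in the current view but is still expected, so it crashed.
        System.out.println("node-A crashed? " + sketch.crashed("node-A"));

        // The sysadmin decides it won't come back and rolls back its pending transactions.
        sketch.forgetNode("node-A", Decision.ROLLBACK, System.out::println);
    }
}
{code}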

              Assignee: Pedro Ruivo (pruivo@redhat.com)
              Reporter: Pedro Ruivo (pruivo@redhat.com)
              Archiver: Amol Dongare (rhn-support-adongare)
