Red Hat Data Grid / JDG-3935

Transaction inconsistency during network partitions


Details

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Critical
    • Fix Version/s: None
    • Affects Version/s: RHDG 7.3.6 GA, RHDG 8.0.1 GA
    • Component/s: Transactions

    Description

      In a scenario where the originator stays in the minority partition (in our test suite, the originator-isolated tests), it is possible for a transaction to be both committed and rolled back in the majority partition.

      With Pessimistic Locking, the transaction is committed in one phase using the PrepareCommand. If the partition happens while the originator is sending the PrepareCommand, the nodes in the majority partition may or may not receive it. We can end up in a state where some nodes receive and apply the PrepareCommand while others never receive it.

      When the topology is updated in the majority partition, the TransactionTable rolls back all transactions whose originator is no longer present. So, on the nodes where the PrepareCommand was not received, the transaction is rolled back.
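      The race can be sketched as follows. This is a hypothetical minimal model (class and method names are illustrative, not the Infinispan API): a one-phase PrepareCommand reaches only part of the majority partition before the split, and the subsequent topology update rolls the transaction back on the nodes that missed it.

```java
import java.util.*;

// Hypothetical model of the race: not real Infinispan code.
public class PartialPrepareDemo {
    enum TxState { NONE, COMMITTED, ROLLED_BACK }

    static class Node {
        TxState state = TxState.NONE;

        // One-phase commit: applying the PrepareCommand commits the transaction.
        void receivePrepare() { if (state == TxState.NONE) state = TxState.COMMITTED; }

        // Topology update: roll back pending transactions whose originator has left.
        void topologyUpdated(Set<String> members, String originator) {
            if (state == TxState.NONE && !members.contains(originator)) state = TxState.ROLLED_BACK;
        }
    }

    static TxState[] run() {
        Node a = new Node(), b = new Node();
        String originator = "O"; // isolated in the minority partition

        a.receivePrepare(); // partition hits mid-broadcast: only node A gets the prepare

        Set<String> majority = Set.of("A", "B"); // new topology without the originator
        a.topologyUpdated(majority, originator);
        b.topologyUpdated(majority, originator);
        return new TxState[] { a.state, b.state };
    }

    public static void main(String[] args) {
        TxState[] s = run();
        // The same transaction is now committed on A and rolled back on B.
        System.out.println("A=" + s[0] + " B=" + s[1]); // A=COMMITTED B=ROLLED_BACK
    }
}
```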

      The originator in the minority partition detects the partition and marks the transaction as partially completed. When the merge occurs, it tries to commit the transaction again. On the nodes where the transaction was rolled back, the transaction is marked as completed, so when the PrepareCommand is received again it throws an IllegalStateException (TransactionTable:386, getOrCreateRemoteTransaction()). In this case, the transaction is never removed from the PartitionHandlingManager and our test suite fails with "there are pending tx".
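      The failure on merge can be illustrated with a toy version of the table (a sketch, not Infinispan's actual TransactionTable or PartitionHandlingManager): replaying the PrepareCommand for a transaction already marked completed throws, so the pending entry is never cleared.

```java
import java.util.*;

// Hypothetical sketch of the post-merge replay failure; names are illustrative.
public class ReplayAfterMergeDemo {
    static class RemoteTxTable {
        final Set<String> completed = new HashSet<>();
        final Map<String, String> running = new HashMap<>();

        // Mirrors the check that fires in getOrCreateRemoteTransaction():
        // re-creating a remote transaction that was already completed is illegal.
        String getOrCreateRemoteTransaction(String txId) {
            if (completed.contains(txId))
                throw new IllegalStateException("Remote transaction " + txId + " already completed");
            return running.computeIfAbsent(txId, id -> "tx:" + id);
        }
    }

    static Set<String> run() {
        RemoteTxTable table = new RemoteTxTable();
        Set<String> pending = new HashSet<>(Set.of("gtx1")); // tracked as partially completed

        table.completed.add("gtx1"); // rolled back during the partition, marked completed

        try {
            table.getOrCreateRemoteTransaction("gtx1"); // PrepareCommand replayed after merge
            pending.remove("gtx1");                     // never reached
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        return pending; // still contains gtx1 -> the "there are pending tx" failure
    }

    public static void main(String[] args) {
        System.out.println("pending=" + run()); // pending=[gtx1]
    }
}
```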

      Another, so far theoretical, scenario is the PrepareCommand being executed when no locks are acquired.

      The same issue can happen with Optimistic Locking for the CommitCommand.

      The problem is that the transaction table cannot tell whether a node left gracefully or not. A solution would be an "expected members" list, ideally kept separate from the CacheTopology to avoid sending it with every topology update. It would also need some sysadmin tooling for the case where a node crashes and will not be back online for a while (or, for some reason, does not need to come back online).
      A sysadmin could remove the node from this list (the CacheTopology is updated and there is no need to increase it) and decide what to do with the pending transactions (or an automatic mechanism could auto-commit/rollback them).
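      The proposal above could look roughly like this. This is a design sketch only (no such API exists today; all names are hypothetical): the table consults the expected-members list and keeps transactions pending while the missing originator is still expected, rolling them back only after a graceful leave or an explicit admin removal.

```java
import java.util.*;

// Design sketch of the proposed "expected members" list; not an existing Infinispan API.
public class ExpectedMembersDemo {
    static class ExpectedMembers {
        private final Set<String> expected = new HashSet<>();

        void join(String node)          { expected.add(node); }
        void gracefulLeave(String node) { expected.remove(node); } // node announced its shutdown
        void adminRemove(String node)   { expected.remove(node); } // sysadmin gives up on the node

        // A node missing from the topology but still expected presumably crashed or is
        // partitioned away, so its transactions should be kept pending, not rolled back.
        boolean shouldRollbackTxOf(String originator, Set<String> topologyMembers) {
            return !topologyMembers.contains(originator) && !expected.contains(originator);
        }
    }

    public static void main(String[] args) {
        ExpectedMembers em = new ExpectedMembers();
        for (String n : List.of("A", "B", "O")) em.join(n);

        Set<String> topology = Set.of("A", "B"); // O disappeared from the topology

        // O is still expected: keep its transactions pending instead of rolling back.
        System.out.println(em.shouldRollbackTxOf("O", topology)); // false

        // The sysadmin decides O is gone for good; now its transactions can be resolved.
        em.adminRemove("O");
        System.out.println(em.shouldRollbackTxOf("O", topology)); // true
    }
}
```

      A graceful leave removes the node from both the topology and the expected list, so the existing rollback-on-topology-update behaviour is preserved in that case.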


            People

              Assignee: Pedro Ruivo (pruivo@redhat.com)
              Reporter: Wolf Fink (rhn-support-wfink)
              Votes: 0
              Watchers: 5
