JBEAP-22033

[GSS](7.4.z) Sessions do not expire in cluster after coordinator is killed


      Follow the steps in the reproducer.zip README.md and verify that everything is fine on one node. Then change the reproducer so the problem occurs on a 2-node cluster:

      cp -a standalone node-1
      cp -a standalone node-2
      ./bin/standalone.sh -c standalone-ha.xml -Djboss.server.base.dir=`pwd`/node-1 -Djboss.node.name=host1
      ./bin/standalone.sh -c standalone-ha.xml -Djboss.server.base.dir=`pwd`/node-2 -Djboss.node.name=host2 -Djboss.socket.binding.port-offset=100
      
      # cluster was formed
      #08:20:57,753 INFO [org.infinispan.CLUSTER] (MSC service thread 1-8) ISPN000094: Received new cluster view for channel ejb: [host1|1] (2) [host1, host2]
      
      # Generate the load. It will access port 8080, thus node-1
      # (a curl-based stand-in for the JMeter plan is sketched below).
      ./jmeter.sh -n -t /path/to/reproducer/jmeter-testcase.jmx
      
      # Kill node-1 with Ctrl+C in the node-1 terminal.
      # You don't need to wait for the previous step to finish.
      
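      If JMeter is not at hand, a rough stand-in for the load step is a loop that opens fresh sessions against node-1. This is only a sketch, not the attached test plan: it assumes hello.war answers on /hello and that each cookie-less request creates a new HTTP session.

      #!/bin/sh
      # Hypothetical stand-in for jmeter-testcase.jmx (the real plan is in reproducer.zip).
      # No cookie jar is used, so every request forces the server to create a new session.
      URL=http://localhost:8080/hello/   # assumed context path of hello.war
      for i in $(seq 1 32); do
        curl -s -o /dev/null "$URL"
      done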

      On the node-2 terminal you can see that the sessions were rescheduled correctly:

      08:21:48,681 TRACE [org.wildfly.clustering.web.infinispan] (InfinispanSessionManager - 1) Session tYYEQST88BzExZLsCGdmBe9qdmwtPKRA6G86AziT will expire in 59719 ms
      

      And they seem to expire correctly as well:

      08:22:48,400 TRACE [org.wildfly.clustering.web.infinispan] (SessionExpirationScheduler - 1) Expiring session tYYEQST88BzExZLsCGdmBe9qdmwtPKRA6G86AziT
      08:22:48,402 TRACE [org.wildfly.clustering.web.infinispan] (SessionExpirationScheduler - 1) Session tYYEQST88BzExZLsCGdmBe9qdmwtPKRA6G86AziT has expired.
      08:22:48,403 TRACE [org.wildfly.clustering.web.infinispan] (SessionExpirationScheduler - 1) Session tYYEQST88BzExZLsCGdmBe9qdmwtPKRA6G86AziT will be removed
      
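      These messages only appear when TRACE logging is enabled for the org.wildfly.clustering.web.infinispan category. One way to enable it with jboss-cli, connecting to node-2's management interface (9990 + the port offset of 100 = 10090); the console handler level usually has to be lowered as well:

      ./bin/jboss-cli.sh --connect --controller=localhost:10090
      /subsystem=logging/logger=org.wildfly.clustering.web.infinispan:add(level=TRACE)
      /subsystem=logging/console-handler=CONSOLE:write-attribute(name=level, value=TRACE)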

      However, when verifying from other sources, the sessions seem to still be there.

      ./jmap-histo-check.sh <java_pid> stabilized on the following output:

      Mon Jun 7 08:46:34 AM CEST 2021
      PID: 27682
      # before GC --------------------------
       num #instances #bytes class name (module)
       16: 16000 384000 org.wildfly.clustering.web.infinispan.session.fine.SessionAttributeKey
       781: 32 512 org.wildfly.clustering.web.infinispan.session.SessionAccessMetaDataKey
       782: 32 512 org.wildfly.clustering.web.infinispan.session.SessionCreationMetaDataKey
       783: 32 512 org.wildfly.clustering.web.infinispan.session.fine.SessionAttributeNamesKey
      --------------------------------------
      # trigger GC to clean up for testing purposes
      27682:
      Command executed successfully
      # after GC ---------------------------
       num #instances #bytes class name (module)
       16: 16000 384000 org.wildfly.clustering.web.infinispan.session.fine.SessionAttributeKey
       754: 32 512 org.wildfly.clustering.web.infinispan.session.SessionAccessMetaDataKey
       755: 32 512 org.wildfly.clustering.web.infinispan.session.SessionCreationMetaDataKey
       756: 32 512 org.wildfly.clustering.web.infinispan.session.fine.SessionAttributeNamesKey
      --------------------------------------
      
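      The check script itself ships in reproducer.zip; what follows is a rough reconstruction of what it does, inferred from the output above ("Command executed successfully" is what jcmd prints for GC.run):

      #!/bin/sh
      # Hypothetical reconstruction of jmap-histo-check.sh; the real script is attached.
      PID=$1
      date
      echo "PID: $PID"
      echo "# before GC --------------------------"
      jmap -histo "$PID" | grep -E '#instances|org\.wildfly\.clustering\.web\.infinispan'
      echo "--------------------------------------"
      # force a full GC so soft/weak references cannot explain the leftover keys
      jcmd "$PID" GC.run
      echo "# after GC ---------------------------"
      jmap -histo "$PID" | grep -E '#instances|org\.wildfly\.clustering\.web\.infinispan'
      echo "--------------------------------------"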

      That list should be empty after all sessions expire successfully.

      The metrics endpoint also keeps reporting 32 active sessions. The sessions are not invalidated; the count should drop to zero.

      http localhost:10090/metrics | grep jboss_undertow_active_sessions
      # HELP jboss_undertow_active_sessions Number of active sessions
      # TYPE jboss_undertow_active_sessions gauge
      jboss_undertow_active_sessions{deployment="hello.war",subdeployment="hello.war",microprofile_scope="vendor"} 32.0
      
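      To confirm the gauge really never drops after the session timeout passes, the same query can simply be polled in a loop:

      # re-run the metrics query every 5 seconds and watch the gauge
      watch -n 5 'http localhost:10090/metrics | grep jboss_undertow_active_sessions'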

        1. node-1.log
          60 kB
        2. node-2.hprof.zip
          24.22 MB
        3. node-2.log
          58 kB
        4. reproducer.zip
          52 kB
        5. third_failure.zip
          21.39 MB

            Assignee: Paul Ferraro (pferraro@redhat.com)
            Reporter: Martin Choma (mchoma@redhat.com)