WildFly / WFLY-18384

[CLUSTERING] File containing session data is never shrunk or deleted


      Create a cluster of 4 nodes:

      for i in 1 2 3 4; do
        rm -rf server$i
        unzip -q wildfly-30.0.0.Beta1-202308192044-7e816de9.zip
        mv server server$i
      done
      

      Configure the 4 nodes to persist session data in a file that should be purged:

      cat <<EOF > $PWD/test.cli
      embed-server --server-config=standalone-ha.xml
      if (outcome != success) of /subsystem=jgroups:read-attribute(name=default-stack)
      /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)
      else
      /subsystem=jgroups:write-attribute(name=default-stack,value=tcp)
      /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)
      end-if
      batch
      /subsystem=infinispan/cache-container=web/distributed-cache=dist:remove
      /subsystem=infinispan/cache-container=ejb/distributed-cache=dist:remove
      # web cache
      /subsystem=infinispan/cache-container=web/replicated-cache=dist:add()
      /subsystem=infinispan/cache-container=web/replicated-cache=dist/component=locking:add(isolation=REPEATABLE_READ)
      /subsystem=infinispan/cache-container=web/replicated-cache=dist/component=transaction:add(mode=BATCH)
      /subsystem=infinispan/cache-container=web/replicated-cache=dist/store=file:add(purge=true, passivation=true)
      /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache, value=dist)
      # ejb cache
      /subsystem=infinispan/cache-container=ejb/replicated-cache=dist:add()
      /subsystem=infinispan/cache-container=ejb/replicated-cache=dist/component=locking:add(isolation=REPEATABLE_READ)
      /subsystem=infinispan/cache-container=ejb/replicated-cache=dist/component=transaction:add(mode=BATCH)
      /subsystem=infinispan/cache-container=ejb/replicated-cache=dist/store=file:add(purge=true, passivation=true)
      /subsystem=infinispan/cache-container=ejb:write-attribute(name=default-cache, value=dist)
      run-batch
      # session timeout after 1 minute
      /subsystem=undertow/servlet-container=default:write-attribute(name=default-session-timeout, value=1)
      EOF
      
      ./server1/bin/jboss-cli.sh --file=$PWD/test.cli
      ./server2/bin/jboss-cli.sh --file=$PWD/test.cli
      ./server3/bin/jboss-cli.sh --file=$PWD/test.cli
      ./server4/bin/jboss-cli.sh --file=$PWD/test.cli
      

      Deploy an application that persists session data (clusterbench-ee10.ear):

      CLUSTERBENCH_EAR=clusterbench-ee10.ear
      cp $CLUSTERBENCH_EAR ./server1/standalone/deployments/
      cp $CLUSTERBENCH_EAR ./server2/standalone/deployments/
      cp $CLUSTERBENCH_EAR ./server3/standalone/deployments/
      cp $CLUSTERBENCH_EAR ./server4/standalone/deployments/
      

      Start the 4 nodes in separate shells:

      ./server1/bin/standalone.sh --server-config=standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=wildfly1
      
      ./server2/bin/standalone.sh --server-config=standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.node.name=wildfly2
      
      ./server3/bin/standalone.sh --server-config=standalone-ha.xml -Djboss.socket.binding.port-offset=300 -Djboss.node.name=wildfly3
      
      ./server4/bin/standalone.sh --server-config=standalone-ha.xml -Djboss.socket.binding.port-offset=400 -Djboss.node.name=wildfly4
      

      Create enough sessions to get, e.g., 3 store files created. If you are familiar with JMeter and you deployed clusterbench-ee10.ear, you can use jmeter.jmx: open it in the JMeter 5.5 GUI, press the start button, wait until all 2000 sessions are created, then stop and repeat enough times to get the 3 store files created.

      Alternatively, you can run a script like this:

      for i in {1..20}
      do
        echo "JMeter Run $i ..."
        apache-jmeter-5.5/bin/jmeter -n -t jmeter.jmx
      done
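
      Between runs, a small helper (hypothetical, not part of the original reproducer) can report how many store files each node has created so far, so you know when to stop repeating the JMeter runs. The data path is the one from the listings below; the directories do not exist until the first sessions are persisted.

```shell
# Count the ispn store files on each node; 0 until sessions are persisted.
# The path is the one observed in the listings below.
for i in 1 2 3 4; do
  dir=server$i/standalone/data/infinispan/web/clusterbench-ee10.ear.clusterbench-ee10-web.war/data
  count=$(ls "$dir" 2>/dev/null | wc -l)
  echo "server$i: $count store file(s)"
done
```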
      

      At this point you should have the following files:

      $ ls -ltr server1/standalone/data/infinispan/web/clusterbench-ee10.ear.clusterbench-ee10-web.war/data
      total 32872
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:12 ispn12.0
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:22 ispn12.1
      -rw-r--r--. 1 tborgato tborgato   105192 Sep 28 18:22 ispn12.2
      
      $ ls -ltr server2/standalone/data/infinispan/web/clusterbench-ee10.ear.clusterbench-ee10-web.war/data
      total 32872
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:12 ispn12.0
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:22 ispn12.1
      -rw-r--r--. 1 tborgato tborgato   105192 Sep 28 18:22 ispn12.2
      
      $ ls -ltr server3/standalone/data/infinispan/web/clusterbench-ee10.ear.clusterbench-ee10-web.war/data
      total 32872
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:12 ispn12.0
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:22 ispn12.1
      -rw-r--r--. 1 tborgato tborgato   105192 Sep 28 18:22 ispn12.2
      
      $ ls -ltr server4/standalone/data/infinispan/web/clusterbench-ee10.ear.clusterbench-ee10-web.war/data
      total 32872
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:12 ispn12.0
      -rw-r--r--. 1 tborgato tborgato 16777152 Sep 28 18:22 ispn12.1
      -rw-r--r--. 1 tborgato tborgato   105192 Sep 28 18:22 ispn12.2
      

      Wait until the sessions expire (the default is 30 minutes, but this reproducer sets the session timeout to 1 minute - I waited more than 1 hour to be sure): you will see that none of these files shrinks in size or is deleted;

      Note that if you stop and restart a node, the files are automatically deleted;
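
      To make the observation easier to repeat, a minimal (hypothetical) snapshot helper can be run periodically while waiting, e.g. via `watch -n 60`; the path is taken from the listings above and must be adjusted per node.

```shell
# One-shot snapshot of the store files' sizes; run periodically while waiting
# for sessions to expire. DATA_DIR is the path from the listings above.
DATA_DIR=server1/standalone/data/infinispan/web/clusterbench-ee10.ear.clusterbench-ee10-web.war/data
echo "--- $(date) ---"
ls -l "$DATA_DIR" 2>/dev/null || echo "(data directory not found)"
```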


      Session data can be stored on the file system in a file located in standalone/data/infinispan/web/<DEPLOYMENT_NAME>/data;

      When a session expires, this file should be purged and shrink in size;

      This no longer happens;

        1. clusterbench-ee10.ear (44 kB)
        2. jmeter.jmx (13 kB)
        3. screenshot-1.png (127 kB)
        4. screenshot-2.png (266 kB)
        5. Screenshot from 2023-08-25 16-18-37.png (69 kB)


            Tommaso Borgato added a comment

            rhn-engineering-rhusar I also tried with the default cache configuration (session timeout still set to 1 minute; waited 1 hour anyway) and the result is the same: even more than one hour after the sessions were last accessed, the files are not deleted or modified in any way.

            Tommaso Borgato added a comment - edited

            rhn-engineering-rhusar Yes I did: I got to the point where 4 files were created on each node, the session timeout was 1 minute, and after more than one hour none of the 4 files on any of the 4 nodes was shrunk or deleted (the older files show 0% usage).

            Radoslav Husar added a comment

            tborgato@redhat.com Given that the test is updated, were you able to verify the behavior with the updated test? Thanks.

            Tommaso Borgato added a comment

            rhn-engineering-rhusar Thanks for pointing out the correct expected behavior; I have updated the reproducer accordingly.

            Tommaso Borgato added a comment - edited

            tborgato@redhat.com needs to update the reproducer according to https://infinispan.org/docs/stable/titles/configuring/configuring.html#file-stores_persistence

            When append-only files:

            Reach their maximum size, Infinispan creates a new file and starts writing to it.

            Reach the compaction threshold of less than 50% usage, Infinispan overwrites the entries to a new file and then deletes the old file.
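
            The 50% rule above can be illustrated with a small back-of-the-envelope check (the values are hypothetical; max_file_size matches the ispn12.* file size seen in the listings):

```shell
# Sketch of the documented compaction rule: a store file becomes eligible for
# compaction only once its live-data usage drops below 50% of its size.
max_file_size=16777152   # matches the ispn12.* file size from the listings
live_bytes=1048576       # assumed amount of still-live (non-expired) data
usage_pct=$(( live_bytes * 100 / max_file_size ))
if [ "$usage_pct" -lt 50 ]; then
  echo "usage ${usage_pct}% < 50% -> eligible for compaction"
else
  echo "usage ${usage_pct}% >= 50% -> file kept as-is"
fi
```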


            Radoslav Husar added a comment

            Just adding a note here that this has been discussed, and my understanding of the test and of the Infinispan implementation is that, in this scenario, the shrinking of the 'append only' files used here will not be triggered by the test conditions. The conditions for compaction are described at https://infinispan.org/docs/stable/titles/configuring/configuring.html#file-stores_persistence - so perhaps the test needs to be updated and the issue eventually rejected.

            Downgrading to critical to be in sync with the JBEAP issue.

              Assignee: rhn-engineering-rhusar (Radoslav Husar)
              Reporter: tborgato@redhat.com (Tommaso Borgato)