JBoss Core Services / JBCS-248

Garbage collection performance degrades as database file size increases



      Make sure you do not have `SecCollectionTimeout` configured so the issue can be reproduced quickly.

      # access_log format
      LogFormat "%t \"%r\" %>s %b %{SESSIONID}C Time Taken: %D" combined
      
      <IfModule mod_security2.c>
          SecRuleEngine On
          SecDebugLog logs/modsec_debug.log
          SecDebugLogLevel 9
          SecCookieFormat 0
          SecTmpDir /dev/shm/mod_security/
          SecDataDir /dev/shm/mod_security/
      
          # Track each SESSIONID cookie as a ModSecurity session and bump a
          # per-session counter so the SESSION collection is persisted to disk.
          SecRule REQUEST_COOKIES:SESSIONID !^$  phase:1,id:118,nolog,pass,setsid:%{REQUEST_COOKIES.SESSIONID}
          SecAction phase:1,id:119,nolog,pass,setvar:SESSION.my_counter=+1
      </IfModule>
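
      The `setsid` action persists a SESSION collection for every distinct SESSIONID under `SecDataDir`, so each request from the client script below adds another record to the SDBM database and the file keeps growing. One way to watch that growth while the test runs is sketched here; the exact file names (for example `default_SESSION.pag`) are ModSecurity's choice, so list the directory and use whatever actually appears:

      #!/bin/bash
      # Print the SDBM collection files under SecDataDir every few seconds so
      # their growth can be correlated with the Time Taken values in access_log.
      while true; do
          ls -lh /dev/shm/mod_security/
          sleep 5
      done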
      

      Here is the "client" script:

      #!/bin/bash
      # Send 10,000 requests, each with a unique SESSIONID cookie, so ModSecurity
      # stores a separate session record per request and the collection file grows.
      for i in $(seq 1 10000); do
          curl -b "SESSIONID=test$i" localhost/index.html &>/dev/null
      done
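
      If you also want per-request latency captured by the client itself (rather than only in access_log), a variant using curl's `%{time_total}` write-out is sketched below; the `client_times.txt` output file name is arbitrary:

      #!/bin/bash
      # Same request loop, but also record curl's measured total time per request
      # so the slowdown is visible without parsing access_log.
      for i in $(seq 1 10000); do
          t=$(curl -s -o /dev/null -w '%{time_total}' -b "SESSIONID=test$i" localhost/index.html)
          echo "test$i $t" >> client_times.txt
      done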
      

      To reproduce, start httpd with the configuration above, run the client script, and then observe the access_log and mod_security debug logs to see the issue happening.

      Here is a snippet from access_log showing when the slowness starts to happen:

      [14/Dec/2016:08:38:03 -0500] "GET /index.html HTTP/1.1" 200 test2736 Time Taken: 1512
      [14/Dec/2016:08:38:03 -0500] "GET /index.html HTTP/1.1" 200 test2737 Time Taken: 67055
      [14/Dec/2016:08:38:03 -0500] "GET /index.html HTTP/1.1" 200 test2738 Time Taken: 121765
      [....]
      [14/Dec/2016:08:38:53 -0500] "GET /index.html HTTP/1.1" 200 6 test8480 Time Taken: 1789
      [14/Dec/2016:08:38:53 -0500] "GET /index.html HTTP/1.1" 200 6 test8481 Time Taken: 213163
      [14/Dec/2016:08:38:53 -0500] "GET /index.html HTTP/1.1" 200 6 test8482 Time Taken: 435829
      [14/Dec/2016:08:38:53 -0500] "GET /index.html HTTP/1.1" 200 6 test8483 Time Taken: 428476
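
      To pull the latency column out of access_log for the whole run, something like the following works; it assumes the LogFormat above, where the `%D` value is the last field, and a relative `logs/` directory like the one used for `SecDebugLog`:

      # Print the SESSIONID and the Time Taken value (microseconds) per request.
      awk '{ print $(NF-3), $NF }' logs/access_log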
      

      Finally, here is an example of the mod_security debug logging showing garbage collection duration for a long-running request:

      [14/Dec/2016:08:38:53 -0500] [localhost/sid#7f0be55d13b0][rid#7f0be569fce0][/index.html][4] Garbage collection took 228151 microseconds.
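
      All of the garbage-collection timings can be pulled out of the debug log in one go; the `logs/modsec_debug.log` path matches the `SecDebugLog` directive above:

      # Extract every garbage collection duration recorded by mod_security.
      grep -o 'Garbage collection took [0-9]* microseconds' logs/modsec_debug.log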
      

      As the database file size increases, garbage collection takes longer and requests slow down. The configuration above reproduces the issue, and the client script generates enough unique sessions to bloat the database. At the end of our 10,000-request test, the database size was only 49M.

      This has been reported upstream: https://github.com/SpiderLabs/ModSecurity/issues/1286
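
      As a stopgap while the upstream issue is open, an already-bloated database can be reset by removing the SDBM files while httpd is stopped; a minimal sketch, assuming the `SecDataDir` from the configuration above and a systemd-managed httpd (the service name below is a placeholder):

      #!/bin/bash
      # Reset ModSecurity's persistent collections by deleting the SDBM files.
      # Stop httpd first so no worker has the database open while it is removed.
      systemctl stop httpd          # service name is a placeholder; adjust to your install
      rm -f /dev/shm/mod_security/*.dir /dev/shm/mod_security/*.pag
      systemctl start httpd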
