JBoss Enterprise Application Platform
JBEAP-15386

Memory leak when deployment is redeployed multiple times


    • Type: Bug
    • Resolution: Done
    • Priority: Blocker
    • Fix Version/s: 7.2.0.CD14
    • Affects Version/s: 7.2.0.Beta
    • Component/s: MP OpenTracing
    • Labels: None

      Redeploying the attached deployment memoryLeaks.war multiple times (our automated test does it 100 times) appears to trigger a memory leak in WildFly. Using git bisect, the first commit that fails is 651dc8a2f0d4ccad6a35f7b1d28626b43ebb2a5d in the WildFly repository, which is why I selected the MP OpenTracing component for this issue. The test deployment itself is very simple and contains just a JSP file plus a few EJB classes that are used to call GC and check the current heap usage.

      Brief description of the test:

      1. the attached deployment is deployed
      2. System.gc() is called via an EJB call
      3. initial heap usage is recorded at the beginning of the test via an EJB call that checks all instances returned by ManagementFactory.getMemoryPoolMXBeans()
      4. the deployment is redeployed 100 times
      5. System.gc() is called via an EJB call again, heap usage is checked again via an EJB call, and the results are compared

      The heap in use at the end of the test is about 25 MB larger than the initial size.
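      The heap-usage check from the steps above can be sketched in plain Java (a minimal standalone sketch, not the actual EJB code from the attached deployment; the class and method names here are hypothetical):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapUsageCheck {

    // Sums current usage across all HEAP-type memory pools, mirroring
    // what the test's EJB is described as doing with
    // ManagementFactory.getMemoryPoolMXBeans().
    static long heapUsedBytes() {
        long used = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                used += pool.getUsage().getUsed();
            }
        }
        return used;
    }

    public static void main(String[] args) {
        System.gc(); // request a GC before the first measurement
        long before = heapUsedBytes();
        // ... redeploy the application 100 times here ...
        System.gc(); // request a GC before the second measurement
        long after = heapUsedBytes();
        System.out.println("Heap delta: " + (after - before) + " bytes");
    }
}
```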

      I also checked manually with the VisualVM tool: create a heap dump at the beginning and at the end of the test, then compare the two and inspect the differences. Using a filter, I can see quite a large increase in the number of instances of the org.jboss.modules.* classes. However, the biggest memory footprint comes from a huge increase in instances of java.util.HashMap$Node, char[], java.util.HashMap$Node[] and a few others. I don't see such an increase in instances of these classes with an older WildFly build from before the problematic commit. To be honest, I don't see a direct link to the problematic commit mentioned above. As the heap dumps are too big to attach here, I am attaching some screenshots instead.
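      For reference, heap dumps like the ones compared in VisualVM can be captured with the standard JDK jmap tool (a sketch; `<pid>` stands for the server's process id and the file names are arbitrary):

```shell
# dump only live objects (forces a GC first) in binary hprof format
jmap -dump:live,format=b,file=heap-before.hprof <pid>
# ... run the redeploy loop, then capture the second dump ...
jmap -dump:live,format=b,file=heap-after.hprof <pid>
```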

      Also, when microprofile-opentracing-smallrye subsystem is removed via:

      /subsystem=microprofile-opentracing-smallrye:remove()
      

      then no memory leak is detected any more; after adding the subsystem back, the leak is present again. So OpenTracing is indeed involved in this somehow.
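      For completeness, the remove/re-add cycle can be driven non-interactively from the management CLI (a sketch; it assumes a default standalone server, and the parameterless :add() is an assumption based on the subsystem's defaults):

```shell
# remove the subsystem -- the leak no longer reproduces afterwards
bin/jboss-cli.sh --connect --command="/subsystem=microprofile-opentracing-smallrye:remove()"
# add it back -- the leak reproduces again; a reload may be required
bin/jboss-cli.sh --connect --command="/subsystem=microprofile-opentracing-smallrye:add()"
bin/jboss-cli.sh --connect --command=":reload"
```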

        1. memoryLeaks.war
          4 kB
          Jan Stourac
        2. problematic-wf-build-heap-diff-start-end.png
          155 kB
          Jan Stourac
        3. problematic-wf-build-heap-diff-start-end-org-filter.png
          175 kB
          Jan Stourac
        4. sane-wf-build-heap-diff-start-end.png
          152 kB
          Jan Stourac
        5. sane-wf-build-heap-diff-start-end-org-filter.png
          194 kB
          Jan Stourac

              Assignee: jpkroehling@redhat.com Juraci Paixão Kröhling (Inactive)
              Reporter: jstourac@redhat.com Jan Stourac

                Created:
                Updated:
                Resolved: