Type: Bug
Resolution: Done
Priority: Critical
Fix Version: 0.10.1
Sprint: 2020 Week 25-27 (from Jun 15), 2020 Week 28-30 (from Jul 6), 2020 Week 31-33 (from Jul 27)
When running the process-quarkus-example Kogito App with Infinispan persistence, I have observed the following situation.
When an order process instance is created, it also creates an orderItems subprocess, which in turn creates a human task with an assigned work item. The parent order process instance is registered as a listener on the SignalManagerHub via its LightSignalManager, because it needs to listen for completion of its orderItems subprocess. This means all process instances are kept in memory even though they are not needed. In addition, all HumanTaskWorkItemImpl objects, which represent human tasks belonging to the orderItems subprocess, are also kept in memory, referenced by the LightWorkItemManager. Each HumanTaskWorkItemImpl points to its concrete NodeInstance, which in turn points to the ProcessInstance, so all process instances are effectively held in memory a second time. Finally, process instances are also referenced by the DefaultProcessInstanceManager, so they are retained from that side as well.
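To make the retention chain easier to follow, below is a minimal, purely illustrative sketch of how the references line up. The class names mirror the ones mentioned above, but the shapes are simplified stand-ins and not the actual Kogito implementations.
{code:java}
// Simplified, hypothetical model of the reference chain described above;
// not the real Kogito classes. It shows why one HumanTaskWorkItemImpl that
// is reachable from LightWorkItemManager is enough to keep an entire
// ProcessInstance graph alive, and why there are three independent roots.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ProcessInstance { String id; List<NodeInstance> nodes = new ArrayList<>(); }

class NodeInstance { ProcessInstance owner; }              // node instance -> process instance

class HumanTaskWorkItemImpl { NodeInstance nodeInstance; } // work item -> node instance -> process instance

class LightWorkItemManager {
    // every registered human task work item pins its process instance
    Map<String, HumanTaskWorkItemImpl> workItems = new ConcurrentHashMap<>();
}

class SignalManagerHub {
    // parent "order" instances registered as completion listeners for "orderItems"
    Map<String, List<ProcessInstance>> completionListeners = new ConcurrentHashMap<>();
}

class DefaultProcessInstanceManager {
    // a third root that references the same process instances directly
    Map<String, ProcessInstance> processInstances = new ConcurrentHashMap<>();
}
{code}
As long as any one of these three structures is reachable, none of the loaded process instances can be garbage collected, regardless of the Infinispan persistence.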
Another issue is that after hitting the /orders endpoint a second time, an additional 40,000 process instances are added to memory even though they had already been loaded by the previous request. This is why the OutOfMemoryError occurs after hitting the same endpoint a few times: the application simply runs out of free memory as more and more process instances are loaded.
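For illustration only (the names below are invented and this is not the actual Kogito load path), the symptom is consistent with a structure that accumulates a freshly deserialised copy per request instead of being keyed by process instance id:
{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not Kogito code: contrasts a load path that grows
// memory linearly with repeated requests against one that reuses what is
// already loaded.
class InstanceCacheSketch {
    final List<Object> accumulating = new ArrayList<>();
    final Map<String, Object> keyedById = new ConcurrentHashMap<>();

    Object readFromInfinispan(String id) {   // stand-in for deserialisation
        return new Object();
    }

    // matches the observed symptom: every /orders request adds new copies
    void onRequestAccumulating(Collection<String> ids) {
        for (String id : ids) {
            accumulating.add(readFromInfinispan(id));
        }
    }

    // non-duplicating alternative: a repeated request reuses loaded instances
    void onRequestKeyed(Collection<String> ids) {
        for (String id : ids) {
            keyedById.computeIfAbsent(id, this::readFromInfinispan);
        }
    }
}
{code}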
After the Kogito App is restarted, it doesn't hold any process instances, but after another /orders request all process instances are loaded into memory again.
I have analyzed this using the Eclipse Memory Analyzer. I have heap dumps taken at various points while debugging this behaviour and can provide them if needed, but they can also be generated fairly quickly by following the Steps to Reproduce.
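For reference, an equivalent heap dump can be captured while reproducing, either with jmap (jmap -dump:live,format=b,file=heap.hprof <pid>) or programmatically via the standard HotSpot diagnostic MXBean; a minimal sketch follows (the class name and output path are just placeholders):
{code:java}
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Minimal sketch: dumps the live heap of the current JVM to an .hprof file
// that can be opened in Eclipse Memory Analyzer. The class name and output
// path are placeholders, not part of the Kogito example.
public class HeapDumper {

    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        diagnostics.dumpHeap(path, true);   // true = only live (reachable) objects
    }

    public static void main(String[] args) throws Exception {
        dump("kogito-orders.hprof");
    }
}
{code}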