I've found a race condition that can play out any time a non-core thread above minSpareThreads is idling out. It can result in a request being improperly enqueued, like so:
1. The thread pool has 11 total threads: 10 busy and 1 idle. After the idle timeout (default 60 seconds), the idle worker's ThreadPoolExecutor.getTask()/workQueue.poll() wait on the task queue expires. It has not yet called ThreadPoolExecutor.compareAndDecrementWorkerCount()/ThreadPoolExecutor.processWorkerExit() to decrement the worker count and remove itself from the executor's worker set.
2. A new connection comes in and is handed off to the executor, which calls TaskQueue.offer(). Since the idle thread hasn't yet removed itself, parent.getPoolSize() still returns 11, so the request passes the if check at the line below (paraphrased in the sketch after this list) and is enqueued:
https://github.com/apache/tomcat/blob/9.0.x/java/org/apache/tomcat/util/threads/TaskQueue.java#L87
3. The idle thread then finishes exiting and removes itself from the executor, and the executor does not replace it under this condition. The pool is now 10 busy threads with no idle thread available to process the newly enqueued request, so it sits in the queue until one of the other threads becomes free. Typically this is a small, imperceptible delay until another thread is created or goes idle; in the worst case, though, a very large, unexpected delay is imposed on the new request, depending on the run time of the 10 busy tasks.
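For reference, the decision in TaskQueue.offer looks roughly like the paraphrase below. This is a simplified illustration of the linked 9.0.x source, not the exact Tomcat code; the parent field is Tomcat's own org.apache.tomcat.util.threads.ThreadPoolExecutor, which tracks a submitted-task count on top of the JDK behavior, and the class name here is my own.

import java.util.concurrent.LinkedBlockingQueue;
import org.apache.tomcat.util.threads.ThreadPoolExecutor;

// Simplified paraphrase of org.apache.tomcat.util.threads.TaskQueue (9.0.x).
public class TaskQueueSketch extends LinkedBlockingQueue<Runnable> {

    private transient volatile ThreadPoolExecutor parent;

    public void setParent(ThreadPoolExecutor tp) {
        this.parent = tp;
    }

    @Override
    public boolean offer(Runnable o) {
        // No parent executor wired up yet: behave like a plain queue.
        if (parent == null) {
            return super.offer(o);
        }
        // Pool is already at its maximum size: queue the task.
        if (parent.getPoolSize() == parent.getMaximumPoolSize()) {
            return super.offer(o);
        }
        // RACE WINDOW: getPoolSize() still counts a worker whose
        // workQueue.poll() has timed out but which has not yet run
        // processWorkerExit(). With 10 busy threads and a stale pool size
        // of 11, submittedCount (10) <= poolSize (11) holds, so the task
        // is enqueued even though no live thread is waiting to take it.
        if (parent.getSubmittedCount() <= parent.getPoolSize()) {
            return super.offer(o);
        }
        // Below the maximum: returning false makes the executor create a
        // new worker thread instead of queueing the task.
        if (parent.getPoolSize() < parent.getMaximumPoolSize()) {
            return false;
        }
        return super.offer(o);
    }
}

The point is that offer() cannot distinguish a genuinely idle worker from one that has already decided to exit: the pool size it reads is stale for the whole window between poll() timing out and processWorkerExit() completing.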