Type: Bug
Resolution: Won't Do
Priority: Major
The batch-jberet subsystem adds resources to a deployment for each job defined in a job XML descriptor. If there are many descriptors, or the descriptors have a large number of job executions associated with them, reading the resource through a management operation may result in an OutOfMemoryError.
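For context, a minimal sketch of the kind of management read that walks these resources and triggers the refresh shown below. The host, port, and deployment name are illustrative assumptions, not values from this report:

import java.net.InetAddress;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

public class ReadBatchModel {
    public static void main(String[] args) throws Exception {
        // Assumes a local server on the default management port and a
        // deployment named "my-app.war"; adjust to your environment.
        try (ModelControllerClient client =
                ModelControllerClient.Factory.create(InetAddress.getByName("localhost"), 9990)) {
            final ModelNode op = new ModelNode();
            op.get("operation").set("read-resource");
            op.get("address").add("deployment", "my-app.war");
            op.get("address").add("subsystem", "batch-jberet");
            op.get("recursive").set(true);       // descend into the job=* and execution=* children
            op.get("include-runtime").set(true); // include runtime-only attributes
            System.out.println(client.execute(op));
        }
    }
}

A recursive, runtime-inclusive read like this touches every job execution resource, so each registered execution contributes to the response.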
The issue is caused by org.wildfly.extension.batch.jberet.deployment.BatchJobExecutionResource, which registers each job execution for a given job. This method is the issue:
/**
 * Note the access to the {@link #children} is <strong>not</strong> guarded here and needs to be externally
 * guarded.
 */
private void refreshChildren() {
    if (System.currentTimeMillis() - lastRefreshedTime < refreshMinInterval) {
        return;
    }
    final List<Long> executionIds = jobOperator.getJobExecutionsByJob(jobName);
    final Set<String> asNames = executionIds.stream().map(Object::toString).collect(Collectors.toSet());
    children.clear();
    children.addAll(asNames);
    lastRefreshedTime = System.currentTimeMillis();
}
WFLY-14275 introduced a minimum refresh interval so the children are not refreshed too often. However, each refresh still loads every execution ID for the job into memory, so very large repositories can still cause issues.
A possible solution is to limit the number of job executions returned by the resource, as sketched below.
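As an illustration only (this issue was resolved as Won't Do, so no such change shipped), a minimal sketch of that limit. The maxChildren field is hypothetical, and the sketch assumes execution IDs increase over time so the newest executions can be kept; java.util.ArrayList and java.util.Comparator are used alongside the imports the original method already needs:

private void refreshChildren() {
    if (System.currentTimeMillis() - lastRefreshedTime < refreshMinInterval) {
        return;
    }
    // Copy before sorting; the operator may return an unmodifiable list.
    final List<Long> executionIds = new ArrayList<>(jobOperator.getJobExecutionsByJob(jobName));
    // Assumes higher IDs are newer; keep only the most recent executions.
    executionIds.sort(Comparator.reverseOrder());
    final Set<String> asNames = executionIds.stream()
            .limit(maxChildren) // hypothetical, illustrative limit
            .map(Object::toString)
            .collect(Collectors.toSet());
    children.clear();
    children.addAll(asNames);
    lastRefreshedTime = System.currentTimeMillis();
}

Note that this would only bound the management model; the underlying job repository is unchanged. Limiting what the store itself returns is tracked separately in WFLY-15525.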
is cloned by:

JBEAP-30508 [GSS](8.0.z) - WFLY-20773 Large batch repositories can create OutOfMemoryError's when reading the management model (Closed)

is related to:

WFLY-15525 JBeret: Make it possible to limit number of records retrieved via a JDBC store (Closed)
WFLY-7418 Batch deployments with a large number of executed jobs can lock up or slow down the web console (Open)
WFLY-14275 Large job repository is blocking deployment (Closed)
JBEAP-30516 (8.1) - WFLY-20773 Large batch repositories can create OutOfMemoryError's when reading the management model (Closed)