Type: Feature Request
Resolution: Unresolved
Priority: Major
Batch deployments that contain a large number of executed jobs can be extremely slow to process, because reading the /deployment=batch.war/subsystem=batch-jberet resource walks each job instance and then each job execution of that instance.
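For illustration, the slow path corresponds to a recursive read of that resource, e.g. /deployment=batch.war/subsystem=batch-jberet:read-resource(recursive=true,include-runtime=true) from the CLI. Below is a minimal sketch of the same read through the native management client; the host, port, and deployment name are assumptions, and with a large job repository this single blocking call materializes every job instance and execution before it returns:

    import java.net.InetAddress;
    import org.jboss.as.controller.client.ModelControllerClient;
    import org.jboss.dmr.ModelNode;

    public class ReadBatchSubsystem {
        public static void main(String[] args) throws Exception {
            // Connect to the local management interface (address/port are assumptions).
            try (ModelControllerClient client =
                    ModelControllerClient.Factory.create(InetAddress.getByName("127.0.0.1"), 9990)) {
                // Equivalent of:
                // /deployment=batch.war/subsystem=batch-jberet:read-resource(recursive=true,include-runtime=true)
                ModelNode op = new ModelNode();
                op.get("operation").set("read-resource");
                op.get("address").add("deployment", "batch.war");
                op.get("address").add("subsystem", "batch-jberet");
                op.get("recursive").set(true);
                op.get("include-runtime").set(true);
                // Blocks until every job instance and job execution has been read.
                ModelNode result = client.execute(op);
                System.out.println(result.get("outcome").asString());
            }
        }
    }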
One option that could help the web console would be to add a description attribute indicating that the resource may be slow to process. The web console might then populate the data from a background task rather than locking up the UI. The large memory footprint would still be an issue, however.
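A minimal sketch of the background-task idea, reusing the client and operation from the sketch above: the read is submitted asynchronously so the caller's UI thread stays free while the server walks the executions. Note this only addresses responsiveness; the full result is still assembled in memory, which is the footprint problem just mentioned.

    import org.jboss.as.controller.client.ModelControllerClient;
    import org.jboss.as.controller.client.OperationMessageHandler;
    import org.jboss.dmr.ModelNode;
    import org.jboss.threads.AsyncFuture;

    public class BackgroundRead {
        // Submit the slow read-resource operation without blocking the caller.
        static AsyncFuture<ModelNode> submit(ModelControllerClient client, ModelNode op) {
            return client.executeAsync(op, OperationMessageHandler.DISCARD);
        }
        // A console could poll the returned future (or attach a listener) and
        // render a "loading" indicator until get() yields the populated model.
    }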
JBeret might also want to consider a way to archive jobs rather than only purge them. Some users may want to keep all job execution data, and archiving it would reduce the amount of current data that has to be retrieved.
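To make the archive idea concrete, here is a rough sketch that exports execution metadata for one job to a CSV file using only the standard jakarta.batch JobOperator API, as an archiving step that could run before a purge. The class name and file format are illustrative, and the purge itself is left as a comment since in JBeret it goes through the purge batchlet or job repository rather than this API:

    import jakarta.batch.operations.JobOperator;
    import jakarta.batch.runtime.BatchRuntime;
    import jakarta.batch.runtime.JobExecution;
    import jakarta.batch.runtime.JobInstance;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ArchiveJobExecutions {
        // Write one CSV row per job execution of the named job.
        public static void archive(String jobName, Path out) throws Exception {
            JobOperator operator = BatchRuntime.getJobOperator();
            try (PrintWriter writer = new PrintWriter(Files.newBufferedWriter(out))) {
                int count = operator.getJobInstanceCount(jobName);
                for (JobInstance instance : operator.getJobInstances(jobName, 0, count)) {
                    for (JobExecution execution : operator.getJobExecutions(instance)) {
                        writer.printf("%d,%d,%s,%s,%s%n",
                            instance.getInstanceId(),
                            execution.getExecutionId(),
                            execution.getBatchStatus(),
                            execution.getStartTime(),
                            execution.getEndTime());
                    }
                }
            }
            // A purge of the archived executions would follow here (not shown).
        }
    }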
is related to:
- WFLY-15525 JBeret: Make it possible to limit number of records retrieved via a JDBC store (Closed)
- HAL-1962 Support execution-records-limit for jdbc-job-repository in subsystem batch-jberet (Resolved)
- WFLY-14946 More efficient way of getting batch job executions by job name (Closed)

relates to:
- WFLY-15525 JBeret: Make it possible to limit number of records retrieved via a JDBC store (Closed)
- JBEAP-30508 [GSS](8.0.z) - WFLY-20773 Large batch repositories can create OutOfMemoryError's when reading the management model (Closed)
- JBEAP-30516 (8.1) - WFLY-20773 Large batch repositories can create OutOfMemoryError's when reading the management model (Closed)
- WFLY-20773 Large batch repositories can create OutOfMemoryError's when reading the management model (Closed)