JBPM-10100: [GSS] (7.12.0) Select from PROCESSINSTANCELOG takes too long


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 7.74.0.Final

      This query is executed in 20+ concurrent sessions, each running for hours:

      SELECT PROCESSINSTANCEID, PROCESSID, START_DATE, END_DATE, STATUS, PARENTPROCESSINSTANCEID, OUTCOME, DURATION,
             USER_IDENTITY, PROCESSVERSION, PROCESSNAME, CORRELATIONKEY, EXTERNALID, PROCESSINSTANCEDESCRIPTION, SLA_DUE_DATE,
             SLACOMPLIANCE, LASTMODIFICATIONDATE, ERRORCOUNT
      FROM
          ( SELECT LOG.PROCESSINSTANCEID, LOG.PROCESSID, LOG.START_DATE, LOG.END_DATE, LOG.STATUS, LOG.PARENTPROCESSINSTANCEID,
                   LOG.OUTCOME, LOG.DURATION, LOG.USER_IDENTITY, LOG.PROCESSVERSION, LOG.PROCESSNAME, LOG.CORRELATIONKEY, LOG.EXTERNALID,
                   LOG.PROCESSINSTANCEDESCRIPTION, LOG.SLA_DUE_DATE, LOG.SLACOMPLIANCE, COALESCE ( INFO.LASTMODIFICATIONDATE,
                   LOG.END_DATE ) AS LASTMODIFICATIONDATE,
                   (
                      SELECT COUNT ( ERRINFO.ID )
                      FROM EXECUTIONERRORINFO ERRINFO
                      WHERE ERRINFO.PROCESS_INST_ID=LOG.PROCESSINSTANCEID
                      AND ERRINFO.ERROR_ACK=0
                  ) AS ERRORCOUNT
            FROM PROCESSINSTANCELOG LOG
            LEFT JOIN PROCESSINSTANCEINFO INFO ON INFO.INSTANCEID=LOG.PROCESSINSTANCEID
          ) "dbSQL"
      WHERE ERRORCOUNT > 0.0
      ORDER BY START_DATE DESC FETCH FIRST 10 ROWS ONLY


      This could be because the result set is very large: the queried tables contain millions of records. The query consumes a lot of database resources and slows down the entire application, so these sessions have to be killed automatically as soon as they are detected, because they run for too long and consume too many resources at the database level. Even though the database content is periodically cleaned up through the LogCleanupCommand, the remaining data is still huge.
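      Independent of how the query itself is rewritten, a composite index on the columns used by the correlated error lookup can at least turn each probe into an index lookup instead of a scan of EXECUTIONERRORINFO. A minimal sketch, assuming no such index already exists in the schema; the index name below is made up, the columns are taken from the correlated subquery above:

      -- Hypothetical index name; covers the predicate PROCESS_INST_ID = ? AND ERROR_ACK = 0
      CREATE INDEX IDX_ERRORINFO_PINST_ACK
          ON EXECUTIONERRORINFO (PROCESS_INST_ID, ERROR_ACK);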

      The problem really is the way the query is written. The table PROCESSINSTANCELOG currently contains 56M rows, and obsolete rows are removed every week. With 56M rows, a single execution of the query runs for more than an hour. So even if they deleted 50% of the rows (which they can't ...), the query would still take far too long.
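      One possible reformulation is sketched below (an illustration only, not necessarily the change that shipped in 7.74.0.Final): filter on an EXISTS predicate instead of the computed ERRORCOUNT, so rows without unacknowledged errors can be discarded before the correlated COUNT and the ORDER BY ... FETCH FIRST are evaluated. Whether this actually helps depends on the database's optimizer and the available indexes:

      SELECT LOG.PROCESSINSTANCEID, LOG.PROCESSID, LOG.START_DATE, LOG.END_DATE, LOG.STATUS,
             LOG.PARENTPROCESSINSTANCEID, LOG.OUTCOME, LOG.DURATION, LOG.USER_IDENTITY,
             LOG.PROCESSVERSION, LOG.PROCESSNAME, LOG.CORRELATIONKEY, LOG.EXTERNALID,
             LOG.PROCESSINSTANCEDESCRIPTION, LOG.SLA_DUE_DATE, LOG.SLACOMPLIANCE,
             COALESCE ( INFO.LASTMODIFICATIONDATE, LOG.END_DATE ) AS LASTMODIFICATIONDATE,
             (
                SELECT COUNT ( ERRINFO.ID )
                FROM EXECUTIONERRORINFO ERRINFO
                WHERE ERRINFO.PROCESS_INST_ID=LOG.PROCESSINSTANCEID
                AND ERRINFO.ERROR_ACK=0
             ) AS ERRORCOUNT
      FROM PROCESSINSTANCELOG LOG
      LEFT JOIN PROCESSINSTANCEINFO INFO ON INFO.INSTANCEID=LOG.PROCESSINSTANCEID
      WHERE EXISTS (
                SELECT 1
                FROM EXECUTIONERRORINFO ERR
                WHERE ERR.PROCESS_INST_ID=LOG.PROCESSINSTANCEID
                AND ERR.ERROR_ACK=0
            )
      ORDER BY LOG.START_DATE DESC FETCH FIRST 10 ROWS ONLY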

            Assignee: rhn-support-egonzale Enrique Gonzalez Martinez (Inactive)
            Reporter: rhn-support-egonzale Enrique Gonzalez Martinez (Inactive)