JBoss Enterprise Application Platform 4 and 5
JBPAPP-6170

Infinite Delivery Records on the Journal in case of failed deliveries


      Hi Clebert,

      I hit an OutOfMemory exception during server startup. It seems that HornetQ loads records from the journal file into memory with no limit on how many it keeps. This should not happen: a customer who hits this issue will have a serious problem and will not be able to manage the data.
      I am setting this issue to critical priority because I do not have a reproducer for it yet and the journal file comes from an automated test that was executed with an invalid configuration. Even so, the server should be able to deal with this situation and should not end with an OOM.

      I hit this issue with a journal file from an automated test in which I had a bad configuration of the JCA adapter and two servers in a cluster. I will try to prepare a reproducer, but I was already able to extract the following information from the journal:

      1. Bindings records look good.
      2. Message records look good (start of journal, expected number of records).
        In addition, there is a huge number of records like the following:
        ...
        operation@Update,recordID=27;userRecordType=34;isUpdate=true;DeliveryCountUpdateEncoding [queueID=5, count=1]
        operation@Update,recordID=29;userRecordType=34;isUpdate=true;DeliveryCountUpdateEncoding [queueID=5, count=0]
        operation@Update,recordID=31;userRecordType=34;isUpdate=true;DeliveryCountUpdateEncoding [queueID=5, count=0]
        ...

      It seems that this type of record (DeliveryCountUpdateEncoding) is never removed from the in-memory list of journal records, unlike transaction records, which are handled in the load method.
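      For illustration, below is a minimal, self-contained sketch of the suspected pattern, not the actual JournalImpl code (class and field names such as JournalLoadSketch and RecordInfo are hypothetical). Every update record read during load is appended to an in-memory collection and never collapsed, so a journal full of delivery-count updates grows the heap linearly with the file size:

      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical simplification of a journal load loop: every update
      // record is kept in memory, so an unbounded stream of
      // DeliveryCountUpdateEncoding entries for the same few messages
      // accumulates until the heap is exhausted.
      public class JournalLoadSketch {

          static class RecordInfo {
              final long id;
              final byte userRecordType;
              final boolean isUpdate;

              RecordInfo(long id, byte userRecordType, boolean isUpdate) {
                  this.id = id;
                  this.userRecordType = userRecordType;
                  this.isUpdate = isUpdate;
              }
          }

          public static void main(String[] args) {
              List<RecordInfo> records = new ArrayList<RecordInfo>();

              // Simulate a journal containing endless delivery-count updates
              // for the same handful of messages (record IDs 27, 29, 31, ...).
              for (long i = 0; ; i++) {
                  long recordId = 27 + (i % 3) * 2;
                  records.add(new RecordInfo(recordId, (byte) 34, true));
                  // Nothing removes or collapses earlier updates for the same
                  // recordId, so 'records' grows without bound -> OOM.
              }
          }
      }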

      Screenshots from JProfiler will be attached in a few minutes. They show where the problem is probably located. It seems that loading of the journal records into memory is not limited; the problem probably lies in org.hornetq.core.journal.impl.JournalImpl.load. See hornetq-memory9.png.
      The red mark in the pictures represents the HornetQ server start time. The memory graph was captured before the OOM, but you can see the growth of occupied memory.
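      One possible direction for bounding the memory (a sketch only, not a proposed patch; CollapsedLoadSketch and Key are hypothetical names) would be to collapse successive updates for the same record during load, keeping only the latest delivery count per (message, queue) pair, so that memory stays proportional to the number of live messages rather than to the number of update records in the file:

      import java.util.HashMap;
      import java.util.Map;

      // Sketch of a bounded load: later DeliveryCountUpdate-style records
      // overwrite the previous entry for the same (recordId, queueId) key
      // instead of being appended to a list.
      public class CollapsedLoadSketch {

          static final class Key {
              final long recordId;
              final long queueId;

              Key(long recordId, long queueId) {
                  this.recordId = recordId;
                  this.queueId = queueId;
              }

              @Override
              public boolean equals(Object o) {
                  if (!(o instanceof Key)) {
                      return false;
                  }
                  Key k = (Key) o;
                  return k.recordId == recordId && k.queueId == queueId;
              }

              @Override
              public int hashCode() {
                  return (int) (31 * recordId + queueId);
              }
          }

          public static void main(String[] args) {
              Map<Key, Integer> deliveryCounts = new HashMap<Key, Integer>();

              // Replaying the updates from the journal dump above: only the
              // last count per record survives, no matter how many update
              // records the journal contains.
              deliveryCounts.put(new Key(27, 5), Integer.valueOf(1));
              deliveryCounts.put(new Key(29, 5), Integer.valueOf(0));
              deliveryCounts.put(new Key(31, 5), Integer.valueOf(0));
              deliveryCounts.put(new Key(29, 5), Integer.valueOf(1)); // overwrite, no growth
              System.out.println(deliveryCounts.size()); // prints 3
          }
      }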

      Attachments:
        1. hornetq-memory10.png (99 kB)
        2. hornetq-memory11.png (82 kB)
        3. hornetq-memory6.png (98 kB)
        4. hornetq-memory7.png (87 kB)
        5. hornetq-memory8.png (73 kB)
        6. hornetq-memory9.png (96 kB)

      Assignee: Clebert Suconic (csuconic@redhat.com)
      Reporter: Pavel Slavicek (pslavice@redhat.com)
      Votes: 0
      Watchers: 2
