AMQ Broker / ENTMQBR-9126

[7.11] AMQ 7.11.3 will not restart after shutdown, with JDBC persistence and large message counts


      The broker uses JDBC persistence with Microsoft SQL Server as the back end.
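
      For context, the relevant persistence configuration in broker.xml looks roughly like this. This is a sketch, not the customer's actual file: the connection URL is a placeholder, and the table names are inferred from the table listing further down.

      <store>
         <database-store>
            <jdbc-connection-url>jdbc:sqlserver://dbhost:1433;databaseName=artemis</jdbc-connection-url>
            <jdbc-driver-class-name>com.microsoft.sqlserver.jdbc.SQLServerDriver</jdbc-driver-class-name>
            <bindings-table-name>BINDINGS</bindings-table-name>
            <message-table-name>MESSAGE</message-table-name>
            <large-message-table-name>LARGE_MESSAGES</large-message-table-name>
            <page-store-table-name>PAGE_STORE</page-store-table-name>
            <node-manager-store-table-name>NODE_MANAGER</node-manager-store-table-name>
         </database-store>
      </store>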

      Customer uses 'artemis perf producer' to send 2M messages of 1 kB each.
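
      A comparable load can be generated with the CLI; as a sketch, the flags shown below are those of the plain 'artemis producer' command (the exact 'perf producer' options may differ), and the queue name is assumed from the paging address seen later in the log:

      ./artemis producer --destination queue://z_perf_test_queue \
                         --message-count 2000000 --message-size 1024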

      The broker is then shut down and restarted. At the normal log level, nothing appears to happen until, after about twenty minutes, the broker runs out of heap.

      With TRACE log level enabled, we see about 7M log entries of this form:

      2023-11-27 12:26:44,957 TRACE [org.apache.activemq.artemis.jdbc.store.drivers.JDBCConnectionProvider] org.apache.commons.dbcp2.DelegatingResultSet@2e51b95.getString(1) = 000002597.page
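
      Trace output of this kind can be enabled in etc/log4j2.properties with something along these lines (the logger id 'artemis_jdbc' is arbitrary):

      logger.artemis_jdbc.name=org.apache.activemq.artemis.jdbc
      logger.artemis_jdbc.level=TRACE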
      

      The same "NNNN.page" identifiers recur many times in the TRACE log because, after a few thousand increments of the .page number, an exception is thrown and the whole sequence starts again from the beginning:

      2023-11-27 12:26:44,623 TRACE [org.apache.activemq.artemis.core.paging.impl.Page] reading page 2166 on address = z_perf_test_queue_pp0 onlyLargeMessages = false
      java.lang.Exception: trace
              at org.apache.activemq.artemis.core.paging.impl.Page.read(Page.java:154) ~[artemis-server-2.28.0.redhat-00011.jar:2.28.0.redhat-00011] 
              at org.apache.activemq.artemis.core.paging.impl.Page.read(Page.java:141) ~[artemis-server-2.28.0.redhat-00011.jar:2.28.0.redhat-00011]
              at org.apache.activemq.artemis.core.paging.cursor.impl.PageCounterRebuildManager.rebuild(PageCounterRebuildManager.java:223) ~[artemis-server-2.28.0.redhat-00011.jar:2.28.0.redhat-00011]
              at org.apache.activemq.artemis.core.paging.cursor.impl.PageCounterRebuildManager.run(PageCounterRebuildManager.java:191) ~[artemis-server-2.28.0.redhat-00011.jar:2.28.0.redhat-00011]
              at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:57) ~[artemis-commons-2.28.0.redhat-00011.jar:?]
              at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:32) ~[artemis-commons-2.28.0.redhat-00011.jar:?]
              at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:68) ~[artemis-commons-2.28.0.redhat-00011.jar:?]
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
       

      The broker clearly pages whilst the messages are produced and, at the end of production, there are many page store tables with many rows each.

      TableName	RowCounts	Used_MB	Unused_MB	Total_MB
      "BINDINGS"	86	0.06	0.15	0.21
      "LARGE_MESSAGES"	0	0.00	0.00	0.00
      "MESSAGE"	2838452	2843.55	0.43	2843.98
      "NODE_MANAGER"	4	0.02	0.05	0.07
      "PAGE_STORE"	1	0.03	0.11	0.14
      "PAGE_STORE1436785"	6679	64.50	0.09	64.59
      "PAGE_STORE1436787"	6645	64.13	0.08	64.21
      "PAGE_STORE1436788"	6685	64.44	0.09	64.52
      "PAGE_STORE1436789"	6615	63.80	0.09	63.90
      "PAGE_STORE1436790"	6654	64.16	0.11	64.27
      "PAGE_STORE1529286"	4283	41.34	0.12	41.46
      "PAGE_STORE1529311"	4302	41.52	0.13	41.65
      "PAGE_STORE1529322"	4309	41.67	0.10	41.77
      "PAGE_STORE1529333"	4313	41.71	0.06	41.77
      "PAGE_STORE1529390"	4306	41.63	0.08	41.71
      "PAGE_STORE1828788"	11039	105.86	0.04	105.90
      "PAGE_STORE1828816"	11073	106.20	0.08	106.27
      "PAGE_STORE1828826"	11045	105.83	0.07	105.90
      "PAGE_STORE1828838"	11006	105.48	0.05	105.52
      "PAGE_STORE1828886"	11057	105.83	0.07	105.90 

      No changes have been made to any memory-related broker setting. Heap size is about 3 GB.
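
      The heap for a broker instance is set via JAVA_ARGS in etc/artemis.profile; a 3 GB heap would look roughly like this (a sketch — the remaining stock arguments are omitted):

      JAVA_ARGS="-Xms512M -Xmx3G"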

      The heap dump is full of SQL-related artefacts, some of which appear to come from the broker and some from the JDBC driver.

      It looks rather as if the broker repeatedly fails to read all the data back from the paging tables and, on each retry, fails to release the memory associated with the previous attempt; see the sketch below.
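
      To make the suspected failure mode concrete, here is a minimal, self-contained Java sketch of that pattern — illustrative only, not the actual PageCounterRebuildManager code; the class name and thresholds are invented for the illustration:

      import java.util.ArrayList;
      import java.util.List;

      public class RetryLeakSketch {

          // Buffers that stay reachable across attempts -- the hypothetical leak.
          static final List<byte[]> retained = new ArrayList<>();

          // Stand-in for reading one page; fails once a few thousand pages are in,
          // like the exception seen in Page.read during the counter rebuild.
          static void readPage(int pageNumber) {
              retained.add(new byte[1024]); // ~1 kB per message, as in the reproducer
              if (pageNumber > 5_000) {
                  throw new IllegalStateException("read failed");
              }
          }

          public static void main(String[] args) {
              while (true) {
                  try {
                      for (int page = 1; ; page++) {
                          readPage(page); // TRACE would log "NNNNNNNNN.page" here
                      }
                  } catch (IllegalStateException e) {
                      // Retry from page 1, but nothing clears 'retained':
                      // each attempt pins a few more MB until the heap is exhausted.
                  }
              }
          }
      }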
