ENTESB-4055: Improve Hawt.io Performance


    Sprint: 6.3 Sprint 4 (Mar 28 - Apr 29)

      This is the engineering counterpart of JBFPL-244, which was accepted after review.

      When the Hawtio console is used to monitor ActiveMQ in situations where there are thousands of queues, each with many consumers, the whole Hawtio user interface behaves very badly. At best it is very slow; in worse cases there are "slow script" warnings from the browser, or internal timeouts that leave the displayed data incomplete.

      In Fuse 6.1 the problem was more contained, because the Jolokia "maximum collection size" setting was fixed at 500. This was nowhere near large enough to handle the thousand-queue situation, so many queues were simply omitted from the display. This was unsatisfactory, but at least the rest of the Hawtio interface worked properly. In 6.1 R2 P6 the maximum collection size was increased to a huge value (see ENTESB-1641), with the result that the display was correct in situations with perhaps a couple of hundred queues and a couple of consumers each. In situations with thousands of queues and many consumers per queue, however, this large collection size is a liability, because the huge amount of data involved leads to script warnings, timeouts, and a generally slow user experience.

      In Fuse 6.2 the Jolokia maximum collection size is configurable, but that still leaves customers with an uncomfortable choice between truncating their data and having a slow, buggy experience.
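
      For reference, the truncation can be observed directly against the Jolokia endpoint, which accepts maxCollectionSize as a per-request processing parameter (the same kind of limit the console applies). The sketch below is illustrative only; the endpoint URL, credentials and broker name ("amq") are assumptions and need to be adjusted for the installation under test.

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.nio.charset.StandardCharsets;
      import java.util.Base64;

      public class JolokiaQueueRead {
          public static void main(String[] args) throws Exception {
              // Endpoint, credentials and broker name are assumptions for illustration.
              String endpoint = "http://localhost:8181/hawtio/jolokia";
              String mbeanPattern = "org.apache.activemq:type=Broker,brokerName=amq,"
                      + "destinationType=Queue,destinationName=*";

              // maxCollectionSize is a standard Jolokia processing parameter: collections
              // larger than this are truncated in the response, which is the trade-off
              // described above.
              String url = endpoint + "/read/" + mbeanPattern + "/QueueSize"
                      + "?maxCollectionSize=500";

              HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
              String auth = Base64.getEncoder()
                      .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
              conn.setRequestProperty("Authorization", "Basic " + auth);

              // Print the raw JSON response so the truncation (or its absence) is visible.
              try (BufferedReader in = new BufferedReader(
                      new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                  String line;
                  while ((line = in.readLine()) != null) {
                      System.out.println(line);
                  }
              }
          }
      }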

      While it is clear that displaying a larger amount of JMS-related data requires more network traffic and more intensive script processing, we don't experience the same problem when using JConsole to monitor A-MQ. So the problem should not be impossible to solve, even though it might require a significant re-working of the Hawtio-Jolokia-ActiveMQ interaction.

      This problem can be reproduced with a simple Java program that creates thousands of queues with, say, ten consumers each; a sketch of such a reproducer follows. How bad the problem gets depends on the browser used and on how fast the client workstation is.
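
      A minimal sketch of such a reproducer, assuming an ActiveMQ broker at tcp://localhost:61616 and the activemq-client JMS library on the classpath; the queue name prefix, queue count, consumer count and credentials are illustrative and can be adjusted.

      import javax.jms.Connection;
      import javax.jms.Queue;
      import javax.jms.Session;
      import org.apache.activemq.ActiveMQConnectionFactory;

      public class QueueFlood {
          public static void main(String[] args) throws Exception {
              // Broker URL and credentials are assumptions; adjust for the test broker.
              ActiveMQConnectionFactory factory =
                      new ActiveMQConnectionFactory("tcp://localhost:61616");
              Connection connection = factory.createConnection("admin", "admin");
              connection.start();

              int queueCount = 2000;      // "thousands of queues"
              int consumersPerQueue = 10; // "many consumers each"

              for (int q = 0; q < queueCount; q++) {
                  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  Queue queue = session.createQueue("load.test.queue." + q);
                  for (int c = 0; c < consumersPerQueue; c++) {
                      // Every consumer adds a subscription MBean on the broker,
                      // multiplying the JMX data that Hawtio has to fetch and render.
                      session.createConsumer(queue);
                  }
              }

              System.out.println("Created " + queueCount + " queues with "
                      + consumersPerQueue + " consumers each; press Enter to exit.");
              // Keep the queues and consumers registered while Hawtio is exercised.
              System.in.read();
              connection.close();
          }
      }

      With this program running against the broker, browsing the ActiveMQ tab of the Hawtio console should show the slowdown described above.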

      I'm told that this isn't actually a bug. It seems that we can't handle thousands of queues without passing a huge amount of data, which would necessarily be slow. However, I think we could, in principle, rethink the way the interface works, so that such large data volumes aren't needed.

              ggrzybek Grzegorz Grzybek
              rhn-support-kboone Kevin Boone
              Pavel Macik