JBoss A-MQ / ENTMQ-1182

When one of the ensemble servers stops (1 out of 3), it triggers a change to the broker master/slave status.


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: JBoss A-MQ 6.2.1
    • Affects Versions: JBoss A-MQ 6.2, 6.1.1
    • Component: fabric8
    • Labels: None

      We use fabric A-MQ with a JDBC persistence store. The customer is hitting something similar to https://issues.jboss.org/browse/ENTMQ-869, except they are not using LevelDB. In short, they have 3 ensemble servers (root_node1, root_node2, root_node3) and 2 MQ servers with the master/slave mq-profile (mq1 under root_node1 and mq2 under root_node2). When they stop root_node3, the master MQ server (mq1) disappears (it no longer shows under "cluster-list fusemq") and mq2 becomes the master. mq1 never comes back, even though container-list shows it is up and running. I have a few questions:
      1. If they are using the mq-profile with kind = MasterSlave, a JDBC persistence store, and the lease database locker in their activemq.xml, is ZooKeeper controlling the master/slave lock, or is the JDBC lease database locker? ZooKeeper appears to expect the broker names to be the same, which is inconsistent with the lease database locker. However, we see this issue with both the database locker and the lease database locker.
      2. Why would a ZooKeeper change trigger a change in broker status? Looking at ENTMQ-869, LevelDB needs to connect to ZooKeeper, but their JDBC setup does not. Do they need a ZooKeeper client to monitor ZooKeeper status for the lease locker? How does it work behind the scenes?
      3. If it is not ZooKeeper's job to keep the lock but the JDBC lease database locker's, they will need different broker names or lease lock holder IDs, right? That means they need different profiles for mq1 and mq2. How should they set this up?
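
      For reference, a lease-based JDBC master/slave configuration in activemq.xml typically looks like the sketch below. The data source reference (#mysql-ds) and the timing values are placeholder assumptions, not taken from the reported setup. Note that with the lease-database-locker, leaseHolderId defaults to the broker name, which is why two brokers sharing one profile (and therefore one broker name) both present the same lease identity to the database:

      ```xml
      <broker xmlns="http://activemq.apache.org/schema/core" brokerName="mq1">
        <persistenceAdapter>
          <!-- #mysql-ds is an assumed data source bean defined elsewhere in the config -->
          <!-- lockKeepAlivePeriod: how often the master renews its lease (ms) -->
          <jdbcPersistenceAdapter dataSource="#mysql-ds" lockKeepAlivePeriod="5000">
            <locker>
              <!-- lockAcquireSleepInterval: slave retry interval while waiting for the lease (ms);
                   leaseHolderId defaults to brokerName if not set explicitly -->
              <lease-database-locker lockAcquireSleepInterval="10000"/>
            </locker>
          </jdbcPersistenceAdapter>
        </persistenceAdapter>
      </broker>
      ```

      Under this scheme the master must renew its lease faster than the lease expires, and a slave acquires the lock only when the lease row in the database lapses; ZooKeeper is not part of the lock itself, though fabric still uses it to publish which broker is currently master.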

              Assignee: gtully@redhat.com Gary Tully
              Reporter: rhn-support-whui Roger Hui
              Votes: 0
              Watchers: 4
