AMQ Interconnect / ENTMQIC-1992

Enabling AMQ 7.1 broker failback breaks failover support


Details

    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 1.2.0.CR2
    • Affects Version: 1.1.0.GA
    • Component: None
    • Documentation: Release Notes
      Previously, if a router was connected to the master instance of a broker cluster, and then failed over to the backup instance, it was possible for the router to lose track of the master instance and fail to reconnect to it once it was back online. This issue has been fixed to ensure that a router can always reconnect to the original master instance in a broker cluster.
    • Documented as Resolved Issue

      See attached configuration.

      • Start broker-master.
      • Once broker-master is up, start broker-slave.
      • Start the router.
      • Test sending and receiving, and check the failoverList.
      • Stop broker-master; the backup slave should become live.
      • Test sending and receiving, and check the failoverList.
      • Restart broker-master and allow it to become live again, with the slave as backup once more.
      • Test sending and receiving, and check the failoverList.
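      The attached configuration itself is not reproduced in this issue. A minimal qdrouterd connector/waypoint sketch along the following lines (host, port, and address names are assumed here, not taken from the attachment) would exercise the same failover path:

```
# Assumed sketch of a router config for this reproducer; not the attachment.
connector {
    name: broker
    host: 192.168.2.208
    port: 61616
    role: route-container
}

address {
    prefix: queue
    waypoint: yes
}

autoLink {
    addr: queue.test
    connection: broker
    dir: in
}

autoLink {
    addr: queue.test
    connection: broker
    dir: out
}
```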
    • Interconnect - June Sprint

    Description

      This may be a broker-related problem; however, from the end developer's perspective...

      After configuring two brokers for failover and a single router as per ENTMQIC-1975, I see that the router correctly fails over to the slave, and sending messages succeeds both before and after the first failover.
      Initial failover data:

      [0x2534010]:0 <- @open(16) [container-id="broker-master", max-frame-size=4294967295, channel-max=65535, idle-time-out=30000, offered-capabilities=@PN_SYMBOL[:"sole-connection-for-container", :"DELAYED_DELIVERY", :"SHARED-SUBS", :"ANONYMOUS-RELAY"], properties={:product="apache-activemq-artemis", :"failover-server-list"=[{:hostname="192.168.2.208", :scheme="amqp", :port=62616, :"network-host"="192.168.2.208"}], :version="2.4.0.amq-710004-redhat-1"}]
      
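      For illustration only, here is a minimal Python sketch (not the router's actual implementation) of how a client could turn the `failover-server-list` entries from the open frame above into reconnect URLs; the key names mirror the frame shown in this report:

```python
def failover_urls(server_list):
    """Build amqp:// URLs from failover-server-list map entries."""
    urls = []
    for entry in server_list:
        scheme = entry.get("scheme", "amqp")
        # Prefer network-host, falling back to hostname, as both
        # appear in the broker's advertised entry above.
        host = entry.get("network-host") or entry.get("hostname")
        port = entry.get("port", 5672)
        urls.append(f"{scheme}://{host}:{port}")
    return urls


# The single entry advertised by broker-master in the frame above.
servers = [{"hostname": "192.168.2.208", "scheme": "amqp",
            "port": 62616, "network-host": "192.168.2.208"}]
print(failover_urls(servers))  # ['amqp://192.168.2.208:62616']
```

      A working failback would leave the master's entry recoverable from this list; the bug reported here is that, after failover, the router's stored list ends up empty.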

      An initial qdmanage query of the failoverList shows:

      [rkieley@ic1rh configuration]$ qdmanage -b amqp://192.168.2.208:5672 --type=connector query failoverList                                                                                                                                       
      [
        {
          "failoverList": "amqp://192.168.2.208:5672"
        }
      ]
      

      Both before and after failover, I can successfully send and receive messages via the live broker.

      However, when failover occurs, I do not see any failover-related information being received by the router, and a subsequent check of the failoverList reveals:

      [rkieley@ic1rh configuration]$ qdmanage -b amqp://192.168.2.208:5672 --type=connector query failoverList
      []
      [rkieley@ic1rh configuration]$
      

      Once failback occurs, I can no longer successfully send messages via the waypointed address.

    People

      Assignee: Ganesh Murthy (gmurthy@redhat.com)
      Reporter: Roderick Kieley (rhn-support-rkieley)
      Votes: 0
      Watchers: 6
