  1. AMQ Broker
  2. ENTMQBR-6798

Autocreated multicast queue in broker cluster instance is not replicated to other broker cluster instances


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Major
    • Affects Version/s: AMQ 7.10.0.GA
    • Component/s: broker-core

      I then perform the following actions:
      Please find the Spring Boot publisher and subscriber demo application attached.

      Steps:
      0. Start the broker cluster.
      1. Start the producer application.
      2. Let the producer publish a few messages and then start the subscriber application.
      3. Stop the subscriber application[amqp-demo-subscriber].
      4. Stop the first broker in the connection list causing the publisher application to failover to the second broker in the list.
      5. Stop the publisher application (after some messages have been published to the second broker).
      6. Start the subscriber application (which connects to the second broker in the list, because the first one is still down).
      7. Start the first broker in the connection list.

       

      Now messages that were produced after the subscriber was stopped at step 3 are lost.

       

      The cause of this issue seems to be that the subscription queue on the multicast address, which is auto-created on the first broker when the subscriber application starts, is not replicated to the second broker. The second broker is therefore unaware of the subscription when the publisher application starts publishing to this address (after the first broker has been stopped), and the published messages vanish.

      Is it possible to update the configuration of the broker cluster so that the auto-created multicast address, together with its (subscription) queues, is automatically replicated to the other broker instances in the cluster?
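      For reference, one commonly suggested approach in this situation is to define the multicast address and its subscription queue statically in broker.xml on every cluster instance, and to enable message redistribution so that messages arriving on a broker without a local consumer can move to a broker that has one. The fragment below is a minimal sketch only; the address and queue names (demo.topic, demo.sub) are placeholders and are not taken from this report.

```xml
<!-- Sketch of a broker.xml fragment (ActiveMQ Artemis / AMQ Broker),
     to be placed on BOTH cluster instances. Address and queue names
     (demo.topic, demo.sub) are placeholders. -->
<core xmlns="urn:activemq:core">

  <!-- Pre-create the multicast address and a subscription queue so that
       every broker knows about the subscription, even before any
       consumer connects to it. -->
  <addresses>
    <address name="demo.topic">
      <multicast>
        <queue name="demo.sub"/>
      </multicast>
    </address>
  </addresses>

  <!-- Allow messages held on a broker with no local consumer to be
       redistributed to a broker that does have one. -->
  <address-settings>
    <address-setting match="demo.topic">
      <redistribution-delay>0</redistribution-delay>
    </address-setting>
  </address-settings>

</core>
```

      With a statically defined queue, messages published to the address on either broker are retained in the local subscription queue, and redistribution can forward them once a consumer attaches elsewhere in the cluster.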


      I have a publisher application and a subscriber application (both Red Hat Fuse) that publish messages to a multicast address and consume messages from this address (using AMQP via the Qpid JMS library with connection pooling).

      The AMQ Broker deployment is a cluster of two instances, and the applications connect to this cluster using a failover connection string.
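      A failover connection string of the kind described above might look like the following Qpid JMS URI; the host names and ports here are placeholders, not values taken from this report:

```
failover:(amqp://broker1:5672,amqp://broker2:5672)?failover.maxReconnectAttempts=-1
```

      With this URI, the Qpid JMS client connects to the first reachable broker in the list and transparently reconnects to the next one when the current broker goes down, which matches the failover behaviour described in step 4 of the reproduction.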

       

        1. Reproducer.tar.xz
          7 kB
        2. amqp-demo2 (1).zip
          24 kB
        3. Reproducer1.tar.xz
          8 kB
        4. Reproducer_Final.tar.xz
          26.26 MB

              Assignee: rhn-support-jbertram Justin Bertram
              Reporter: rhn-support-ychopada Yashashree Chopada
              Votes: 1
              Watchers: 7

                Created:
                Updated:
                Resolved: