AMQ Broker / ENTMQBR-1560

Messages ping-pong during redistribution after a race on consumer updates


    Details

      • Type: Task
      • Resolution: Done
      • Priority: Blocker
      • Affects Version/s: None
      • Fix Version/s: AMQ 7.2.0.GA
      • Component/s: None

    Description

      The scenario uses 3 cluster nodes with ON_DEMAND message load balancing (6 cluster nodes are deployed in total). A minimal reproduction sketch follows the steps below.

      1. Make sure all nodes have the same address & queue created.
      2. Subscribe receiver rA to broker2, expecting it to receive 2 messages.
      3. Subscribe receiver rB to broker3, expecting it to receive 4 messages.
      4. Send 8 messages to broker1.
      5. Alternating, rA and rB receive 2 and 4 messages respectively, as expected.
      6. Subscribe receiver rC to broker3, expecting it to receive the remaining 2 messages.
      7. The 2 leftover messages are the ones in question:
        1. They are either correctly consumed by receiver rC from broker3,
        2. OR they ping-pong between broker2 <-> broker3 (see the odd message statistics with Qpid C++ below),
        3. OR they are wrongly redistributed to broker2 instead of broker3.
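
      For reference, a minimal sketch of the scenario using the Artemis Core JMS client. The broker URLs, queue name, and the consumerWindowSize=0 setting are assumptions (credentials are omitted); adjust them to the actual cluster layout:

      import javax.jms.Connection;
      import javax.jms.JMSException;
      import javax.jms.MessageConsumer;
      import javax.jms.MessageProducer;
      import javax.jms.Session;
      import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

      public class RedistributionRepro {

          // Hypothetical URLs for brokers 1-3 of the cluster.
          static final String BROKER1 = "tcp://localhost:61616";
          static final String BROKER2 = "tcp://localhost:61617";
          static final String BROKER3 = "tcp://localhost:61618";
          // Step 1: the same address & queue must exist on all nodes.
          static final String QUEUE = "redistQueue";

          public static void main(String[] args) throws Exception {
              try (Connection c2 = connect(BROKER2); Connection c3 = connect(BROKER3)) {
                  Session s2 = c2.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  Session s3 = c3.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  MessageConsumer rA = s2.createConsumer(s2.createQueue(QUEUE)); // step 2
                  MessageConsumer rB = s3.createConsumer(s3.createQueue(QUEUE)); // step 3
                  c2.start();
                  c3.start();

                  // Step 4: send 8 messages to broker1; ON_DEMAND load balancing
                  // forwards them to the nodes that have consumers.
                  try (Connection c1 = connect(BROKER1)) {
                      Session s1 = c1.createSession(false, Session.AUTO_ACKNOWLEDGE);
                      MessageProducer p = s1.createProducer(s1.createQueue(QUEUE));
                      for (int i = 0; i < 8; i++) {
                          p.send(s1.createTextMessage("msg-" + i));
                      }
                  }

                  // Step 5: rA takes 2 messages, rB takes 4, leaving 2 in the cluster.
                  for (int i = 0; i < 2; i++) System.out.println("rA: " + rA.receive(5000));
                  for (int i = 0; i < 4; i++) System.out.println("rB: " + rB.receive(5000));
                  rA.close();
                  rB.close();

                  // Step 6: rC on broker3 should trigger redistribution of the 2
                  // leftovers (redistribution-delay is 0 in broker.xml below).
                  MessageConsumer rC = s3.createConsumer(s3.createQueue(QUEUE));
                  for (int i = 0; i < 2; i++) System.out.println("rC: " + rC.receive(5000));
              }
          }

          static Connection connect(String url) throws JMSException {
              // consumerWindowSize=0 disables client-side prefetch, so a broker
              // only dispatches a message when receive() is actually called.
              return new ActiveMQConnectionFactory(url + "?consumerWindowSize=0").createConnection();
          }
      }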

      Current results (and observations):
      The Core JMS, OpenWire JMS, AMQP JMS, and Python clients all work as expected.

      AMQP clients:
      Qpid C++ (legacy) - almost always passes, but reports wrong message-count statistics:
      0 messages in the queue on all nodes, yet ever-increasing numbers of added & acknowledged messages.
      It looks as if the leftover messages are ping-ponging between broker2 and broker3, the nodes where the consumers were connected.
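
      To observe the ping-pong from outside, the per-node messageCount can be polled through the management address. A sketch assuming the default activemq.management address, the hypothetical queue name from the sketch above, and anonymous access:

      import javax.jms.Message;
      import javax.jms.Queue;
      import javax.jms.QueueConnection;
      import javax.jms.QueueRequestor;
      import javax.jms.QueueSession;
      import javax.jms.Session;
      import org.apache.activemq.artemis.api.jms.management.JMSManagementHelper;
      import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

      public class QueueCountProbe {

          public static void main(String[] args) throws Exception {
              // Hypothetical URLs for broker2 and broker3.
              String[] urls = {"tcp://localhost:61617", "tcp://localhost:61618"};
              // With the ping-pong bug, the count keeps jumping between the two
              // nodes instead of settling where the last consumer was attached.
              for (int round = 0; round < 10; round++) {
                  for (String url : urls) {
                      System.out.println(url + " messageCount=" + messageCount(url, "redistQueue"));
                  }
                  Thread.sleep(1000);
              }
          }

          static long messageCount(String url, String queue) throws Exception {
              try (QueueConnection c =
                       (QueueConnection) new ActiveMQConnectionFactory(url).createConnection()) {
                  c.start();
                  QueueSession s = c.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                  Queue mgmt = s.createQueue("activemq.management");
                  QueueRequestor requestor = new QueueRequestor(s, mgmt);
                  Message m = s.createMessage();
                  // Ask the broker for the messageCount attribute of the queue.
                  JMSManagementHelper.putAttribute(m, "queue." + queue, "messageCount");
                  Message reply = requestor.request(m);
                  return ((Number) JMSManagementHelper.getResult(reply)).longValue();
              }
          }
      }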

      Qpid Proton C++ - always fails. The scenario is similar to the Qpid C++ one, but there is a roughly 50% chance that the leftover messages are never consumed, because they keep traveling between broker2 <-> broker3.
      I believe this failure chance would increase with more consumers on more brokers (only a 1/n chance of the messages settling on the right broker).

      .NET Lite - message redistribution does not work correctly. Messages should end up on broker3, but they end up on broker2 instead.
      According to the docs, messages should be redistributed to broker3, where the last active consumer was present.

      Relevant parts of broker.xml

          <cluster-user>ACTIVEMQ.CLUSTER.ADMIN.USER</cluster-user>
          <cluster-password>redhat</cluster-password>
          <broadcast-groups>
            <broadcast-group name="my-broadcast-group">
              <group-address>231.7.7.7</group-address>
              <group-port>9876</group-port>
              <broadcast-period>5000</broadcast-period>
              <connector-ref>artemis</connector-ref>
            </broadcast-group>
          </broadcast-groups>
          <discovery-groups>
            <discovery-group name="my-discovery-group">
              <group-address>231.7.7.7</group-address>
              <group-port>9876</group-port>
              <refresh-timeout>10000</refresh-timeout>
            </discovery-group>
          </discovery-groups>
          <cluster-connections>
            <cluster-connection name="my-cluster">
              <connector-ref>artemis</connector-ref>
              <message-load-balancing>ON_DEMAND</message-load-balancing>
              <max-hops>1</max-hops>
              <discovery-group-ref discovery-group-name="my-discovery-group"/>
            </cluster-connection>
          </cluster-connections>
          <security-settings>
            <security-setting match="#">
              <permission type="createNonDurableQueue" roles="amq"/>
              <permission type="deleteNonDurableQueue" roles="amq"/>
              <permission type="createDurableQueue" roles="amq"/>
              <permission type="deleteDurableQueue" roles="amq"/>
              <permission type="createAddress" roles="amq"/>
              <permission type="deleteAddress" roles="amq"/>
              <permission type="consume" roles="amq"/>
              <permission type="browse" roles="amq"/>
              <permission type="send" roles="amq"/>
              <!-- we need this otherwise ./artemis data imp wouldn't work -->
              <permission type="manage" roles="amq"/>
            </security-setting>
          </security-settings>
          <address-settings>
            <!-- if you define auto-create on certain queues, management has to be auto-create -->
            <address-setting match="activemq.management#">
              <dead-letter-address>DLQ</dead-letter-address>
              <expiry-address>ExpiryQueue</expiry-address>
              <redelivery-delay>0</redelivery-delay>
              <!-- with -1 only the global-max-size is in use for limiting -->
              <max-size-bytes>-1</max-size-bytes>
              <message-counter-history-day-limit>10</message-counter-history-day-limit>
              <address-full-policy>PAGE</address-full-policy>
              <auto-create-queues>true</auto-create-queues>
              <auto-create-addresses>true</auto-create-addresses>
              <auto-create-jms-queues>true</auto-create-jms-queues>
              <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <!--default for catch all-->
            <address-setting match="#">
              <dead-letter-address>DLQ</dead-letter-address>
              <expiry-address>ExpiryQueue</expiry-address>
              <redelivery-delay>0</redelivery-delay>
              <!-- with -1 only the global-max-size is in use for limiting -->
              <max-size-bytes>-1</max-size-bytes>
              <message-counter-history-day-limit>10</message-counter-history-day-limit>
              <address-full-policy>PAGE</address-full-policy>
              <auto-create-queues>true</auto-create-queues>
              <auto-create-addresses>true</auto-create-addresses>
              <auto-create-jms-queues>true</auto-create-jms-queues>
              <auto-create-jms-topics>true</auto-create-jms-topics>
              <redistribution-delay>0</redistribution-delay>
            </address-setting>
          </address-settings>
      
      


    People

      Assignee: Martyn Taylor (mtaylor1@redhat.com) (Inactive)
      Reporter: Michal Toth (mtoth@redhat.com)
      Votes: 0
      Watchers: 2


    Time Tracking

      Original Estimate: 1 day
      Remaining Estimate: 1 day
      Time Spent: Not Specified