AMQ Broker / ENTMQBR-3875

Inconsistent and negative address size

Cause-Consequence-Fix-Result

Messages are reference counted in Artemis, and the first binding that causes a message to be stored increments the reference count and tracks the message size on its page store and on the address message counters.

This is problematic when there are durable subscribers or cluster connections, because the order of acknowledgement can differ from the dispatch order. If the binding and page store that tracked the message size on first use is not the last binding to acknowledge the message, some other binding and page store ends up subtracting the message size in error.

The consequence was page stores reporting negative message-size byte usage, potentially delaying the onset of subsequent paging.
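To make the mis-accounting concrete, here is a deliberately simplified sketch, not Artemis source code, of per-binding size tracking when the acknowledgement order differs from the dispatch order; the counters, names, and sizes are invented for illustration:

~~~
// Hypothetical illustration only -- not Artemis source code.
// Two bindings share one message; the size is recorded on the counter
// of the binding that stored the message first, but the decrement lands
// on whichever binding acknowledges last.
import java.util.concurrent.atomic.AtomicLong;

public class NegativeSizeSketch {
    public static void main(String[] args) {
        AtomicLong binding1PageStoreSize = new AtomicLong(); // e.g. a local subscription
        AtomicLong binding2PageStoreSize = new AtomicLong(); // e.g. a store-and-forward queue

        long messageSize = 1024;

        // First binding to store the message tracks the full size.
        binding1PageStoreSize.addAndGet(messageSize);

        // Last acknowledgement arrives via binding 2, so the decrement
        // hits the wrong counter.
        binding2PageStoreSize.addAndGet(-messageSize);

        System.out.println("binding1 size: " + binding1PageStoreSize); // 1024, never released
        System.out.println("binding2 size: " + binding2PageStoreSize); // -1024, negative
    }
}
~~~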

The fix is to always use the page store corresponding to the target address, not the binding, when accounting for the message-size increment and decrement on the page-store counters. Each subsequent reference increment then tracks just 64 bytes for a Java message reference.

One implication can be counter-intuitive: in a case where a broker has only a store-and-forward queue from a cluster connection, paging will only kick in when the global-max-size bytes limit is reached, because it is only the global-max-size that can combine the message size on the paging store with the references on the store-and-forward queue binding.

If only an individual page store max-size-bytes is configured for the target address, it will never be triggered, because the messages are only stored on the store-and-forward queue in this scenario.
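As a sketch of that configuration trade-off, the embedded-broker snippet below sets both a per-address max-size-bytes and a broker-wide global-max-size; the limit values, the "#" match, and the embedded setup are illustrative, not a recommended configuration. In the store-and-forward scenario above, only the global limit would trigger paging:

~~~
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;
import org.apache.activemq.artemis.core.settings.impl.AddressFullMessagePolicy;
import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

public class PagingLimitsSketch {
    public static void main(String[] args) throws Exception {
        ConfigurationImpl config = new ConfigurationImpl();
        config.setSecurityEnabled(false);

        // Broker-wide limit: the only limit that can combine the page-store
        // size with references held on a store-and-forward queue binding.
        config.setGlobalMaxSize(100 * 1024 * 1024); // 100 MiB, illustrative

        // Per-address limit: will NOT trigger for messages that live only
        // on a cluster connection's store-and-forward queue.
        AddressSettings settings = new AddressSettings()
            .setMaxSizeBytes(10 * 1024 * 1024)       // 10 MiB, illustrative
            .setAddressFullMessagePolicy(AddressFullMessagePolicy.PAGE);

        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
        broker.getActiveMQServer().getAddressSettingsRepository().addMatch("#", settings);
        // ... run the workload, then:
        broker.stop();
    }
}
~~~

The equivalent broker.xml settings are global-max-size under core and max-size-bytes under the relevant address-setting.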


Set up a fully-connected mesh with two brokers (they can be on the same host). I will refer to the brokers as Broker 1 and Broker 2.

Establish a durable consumer on address t1, on Broker 1, using AMQP.

Establish a durable consumer on address t1, on Broker 2, with a different client ID, using AMQP. (A minimal client sketch is shown after the diagram below.)

Publish messages to address t1, on either broker, using AMQP. Messages will be received by both subscribers, as expected, but the warning message will be shown on the broker that first received the message.

       

                             +-----------+
                             |           |
                             |  Broker 1 |
      Publisher on t1 -----> |  Topic t1 |-------> Durable subscriber on t1, client ID 'a'
                             |           |
                             +-----------+
                                 ^   |
                                 |   |
                                 |   v
                             +-----------+
                             |           |
                             |  Broker 2 |
      Publisher on t1 ---->  |  Topic t1 |-------> Durable subscriber on t1, client ID 'b'
                             |           |
                             +-----------+ 
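For reference, here is a minimal Qpid JMS (AMQP) client sketch for the first durable subscriber; the broker URL, client ID, and subscription name are placeholders, and the second subscriber would differ only in broker port and client ID:

~~~
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.qpid.jms.JmsConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        // Point at Broker 1; for the second subscriber, point at Broker 2
        // and use client ID 'b'.
        JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        Connection connection = factory.createConnection();
        connection.setClientID("a"); // distinct client ID per broker
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("t1");

        // Durable subscription on address t1.
        MessageConsumer subscriber = session.createDurableSubscriber(topic, "sub-a");
        Message message = subscriber.receive(10_000);
        System.out.println("received: " + message);

        connection.close();
    }
}
~~~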

       


      In circumstances that are not yet entirely clear, a large number of messages appear in the broker log with the following form:

~~~
2020-05-17 09:24:34,826 WARN [org.apache.activemq.artemis.core.server] AMQ222214: Destination $.artemis.internal.sf.XXX has an inconsistent and negative address size
~~~

The general form and distribution of the warnings seem similar to those reported in the upstream bug ARTEMIS-2768. However, the customer is adamant that no wildcard consumers are in use.

So far as I can tell, the problem has no adverse effects beyond the irritating warning message, which can occur every time a message is accepted.
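To confirm whether the address size has actually gone negative, rather than relying on the log alone, the AddressSize attribute of the address's management MBean can be read over JMX. The snippet below is a sketch: it assumes remote JMX is enabled and that the broker uses the default object-name layout with broker name "localhost", both of which vary per installation.

~~~
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AddressSizeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX service URL; adjust host/port to the broker.
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Object-name layout assumed from the default Artemis JMX domain;
            // adjust the broker and address names to match the installation.
            ObjectName address = new ObjectName(
                "org.apache.activemq.artemis:broker=\"localhost\","
                + "component=addresses,address=\"t1\"");

            Long size = (Long) mbs.getAttribute(address, "AddressSize");
            System.out.println("AddressSize for t1: " + size); // negative => bug
        }
    }
}
~~~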

       

       

Attachments: artemis.log (33.10 MB), attached by Kevin Boone
