AMQ Broker / ENTMQBR-498

slow consumer: consumer is killed even though it receives at a rate higher than the threshold


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • A-MQ 7.0.0.ER16
    • A-MQ 7.0.0.ER14
    • Release Notes
      In a previous Beta release, brokers using the AMQP `slow-consumer-*` parameters could incorrectly close consumers that met the configured thresholds after the first scan interval.

      The issue has been resolved.
    • Documented as Resolved Issue

      In broker.xml:

         </security-settings>

         <address-settings>
            <!-- default for catch all -->
            <address-setting match="#">

            ...

               <slow-consumer-threshold>1</slow-consumer-threshold>
               <slow-consumer-policy>KILL</slow-consumer-policy>
               <slow-consumer-check-period>5</slow-consumer-check-period>

            </address-setting>
         </address-settings>
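      With these settings, the broker checks each consumer every 5 seconds and kills any consumer whose average consumption rate falls below 1 message per second. A minimal sketch of that check (illustrative names, not Artemis internals):

      ```python
      # Hypothetical model of the slow-consumer check as configured above;
      # function and parameter names are illustrative, not Artemis internals.

      def is_slow(messages_acked: int, check_period_s: float,
                  threshold_msgs_per_s: float) -> bool:
          """A consumer is 'slow' if its average consumption rate over the
          check period falls below the configured threshold."""
          rate = messages_acked / check_period_s
          return rate < threshold_msgs_per_s

      # With threshold=1 and check-period=5, a receiver consuming ~4 msgs/s
      # acknowledges ~20 messages per 5-second window and must not be flagged:
      assert is_slow(20, 5, 1) is False   # 4 msgs/s >= 1 msg/s -> not slow
      assert is_slow(3, 5, 1) is True     # 0.6 msgs/s < 1 msg/s -> slow
      ```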
      
      # (1) fill the queue
      ./aac5_sender.py -b 127.0.0.1:5672/jms.queue.myqueue --log-msgs dict -c 2000
      
      # (2) add messages at a rate of 2 msgs/s (200 messages over 100 s)
      ./aac5_sender.py -b 127.0.0.1:5672/jms.queue.myqueue --log-msgs dict -c 200 --duration 100
      
      # (3) remove messages at a rate of 4 msgs/s (200 messages over 50 s); run concurrently with (2)
      ./aac5_receiver.py -b 127.0.0.1:5672/jms.queue.myqueue --reactor-prefetch 2 -c 200 --duration 50 --log-msgs dict --conn-reconnect none
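      The three steps above can be simulated to show the consumption rate the broker should observe (a rough sketch with the reproducer's numbers; the real clients are the aac5 test tools, not this code):

      ```python
      # Rough simulation of steps (1)-(3): a queue pre-filled with 2000
      # messages, a sender adding 2 msgs/s, a receiver removing 4 msgs/s.
      # The numbers come from the reproducer; the model is illustrative.

      queue_depth = 2000          # step (1): pre-filled queue
      acked_per_second = []

      for second in range(50):    # step (3) runs for 50 seconds
          queue_depth += 2        # step (2): sender at 2 msgs/s
          consumed = min(4, queue_depth)   # step (3): receiver at 4 msgs/s
          queue_depth -= consumed
          acked_per_second.append(consumed)

      # Over any 5-second check period the receiver acknowledges 20 messages,
      # i.e. 4 msgs/s -- four times the configured threshold of 1 msg/s.
      window = sum(acked_per_second[:5])
      assert window == 20
      assert window / 5 >= 1      # the consumer should never be flagged slow
      ```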
      

      The receiver in (3) is killed by the broker, even though it consumes at 4 msgs/s, well above the configured threshold of 1 msg/s.

      Broker log around the time of kill:

      14:40:17,068 FINE  [proton.trace] IN: CH[0] : Flow{nextIncomingId=143, incomingWindow=2147483647, nextOutgoingId=0, outgoingWindow=2147483647, handle=0, deliveryCount=142, linkCredit=1, available=null, drain=false, echo=false, properties=null}
      14:40:17,068 FINE  [proton.trace] IN: CH[0] : Disposition{role=RECEIVER, first=141, last=141, settled=true, state=Accepted{}, batchable=false}
      14:40:17,170 FINE  [proton.trace] IN: CH[0] : Flow{nextIncomingId=144, incomingWindow=2147483647, nextOutgoingId=0, outgoingWindow=2147483647, handle=0, deliveryCount=143, linkCredit=1, available=null, drain=false, echo=false, properties=null}
      14:40:17,170 FINE  [proton.trace] IN: CH[0] : Disposition{role=RECEIVER, first=142, last=142, settled=true, state=Accepted{}, batchable=false}
      14:40:17,172 FINE  [proton.trace] IN: CH[0] : Flow{nextIncomingId=145, incomingWindow=2147483647, nextOutgoingId=0, outgoingWindow=2147483647, handle=0, deliveryCount=144, linkCredit=1, available=null, drain=false, echo=false, properties=null}
      14:40:17,172 FINE  [proton.trace] IN: CH[0] : Disposition{role=RECEIVER, first=143, last=143, settled=true, state=Accepted{}, batchable=false}
      14:40:17,274 FINE  [proton.trace] IN: CH[0] : Flow{nextIncomingId=146, incomingWindow=2147483647, nextOutgoingId=0, outgoingWindow=2147483647, handle=0, deliveryCount=145, linkCredit=1, available=null, drain=false, echo=false, properties=null}
      14:40:17,274 FINE  [proton.trace] IN: CH[0] : Disposition{role=RECEIVER, first=144, last=144, settled=true, state=Accepted{}, batchable=false}
      14:40:17,275 FINE  [proton.trace] IN: CH[0] : Flow{nextIncomingId=147, incomingWindow=2147483647, nextOutgoingId=0, outgoingWindow=2147483647, handle=0, deliveryCount=146, linkCredit=1, available=null, drain=false, echo=false, properties=null}
      14:40:17,275 FINE  [proton.trace] IN: CH[0] : Disposition{role=RECEIVER, first=145, last=145, settled=true, state=Accepted{}, batchable=false}
      14:40:17,277 FINE  [proton.trace] IN: CH[0] : Flow{nextIncomingId=148, incomingWindow=2147483647, nextOutgoingId=0, outgoingWindow=2147483647, handle=0, deliveryCount=147, linkCredit=1, available=null, drain=false, echo=false, properties=null}
      14:40:17,277 FINE  [proton.trace] IN: CH[0] : Disposition{role=RECEIVER, first=146, last=146, settled=true, state=Accepted{}, batchable=false}
      14:40:17,325 INFO  [org.apache.activemq.artemis.core.server] AMQ221048: Consumer fad43b15-d0f0-11e6-93d0-0a0027000001:0 attached to queue 'jms.queue.myqueue' from /127.0.0.1:52312 identified as 'slow.' Expected consumption rate: 1 msgs/second; actual consumption rate: 0 msgs/second.
      14:40:17,326 DEBUG [org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl] RemotingServiceImpl::removing connection ID -1457113702
      14:40:17,330 WARN  [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119005: connections for /127.0.0.1:52312 closed by management [code=INTERNAL_ERROR]
      14:40:17,331 WARN  [org.apache.activemq.artemis.core.server] AMQ222061: Client connection failed, clearing up resources for session fad43b15-d0f0-11e6-93d0-0a0027000001
      14:40:17,332 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Couldn't find any bindings for address=activemq.notifications on message=ServerMessage[messageID=34555,durable=true,userID=null,priority=0, bodySize=512, timestamp=0,expiration=0, durable=true, address=activemq.notifications,properties=TypedProperties[_AMQ_RoutingName=jms.queue.myqueue,_AMQ_Distance=0,_AMQ_ConsumerCount=0,_AMQ_User=NULL-value,_AMQ_SessionName=fad43b15-d0f0-11e6-93d0-0a0027000001,_AMQ_Address=jms.queue.myqueue,_AMQ_RemoteAddress=/127.0.0.1:52312,_AMQ_NotifType=CONSUMER_CLOSED,_AMQ_FilterString=NULL-value,_AMQ_NotifTimestamp=1483364417332,_AMQ_ClusterName=jms.queue.myqueuea7fe10bb-c708-11e6-a99a-0a0027000001]]@1675926105
      14:40:17,333 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message ServerMessage[messageID=34555,durable=true,userID=null,priority=0, bodySize=512, timestamp=0,expiration=0, durable=true, address=activemq.notifications,properties=TypedProperties[_AMQ_RoutingName=jms.queue.myqueue,_AMQ_Distance=0,_AMQ_ConsumerCount=0,_AMQ_User=NULL-value,_AMQ_SessionName=fad43b15-d0f0-11e6-93d0-0a0027000001,_AMQ_Address=jms.queue.myqueue,_AMQ_RemoteAddress=/127.0.0.1:52312,_AMQ_NotifType=CONSUMER_CLOSED,_AMQ_FilterString=NULL-value,_AMQ_NotifTimestamp=1483364417332,_AMQ_ClusterName=jms.queue.myqueuea7fe10bb-c708-11e6-a99a-0a0027000001]]@1675926105 is not going anywhere as it didn't have a binding on address:activemq.notifications
      14:40:17,333 WARN  [org.apache.activemq.artemis.core.server] AMQ222107: Cleared up resources for session fad43b15-d0f0-11e6-93d0-0a0027000001
      14:40:17,342 FINE  [proton.trace] IN: CH[0] : Transfer{handle=0, deliveryId=13, deliveryTag=14, messageFormat=0, settled=false, more=false, rcvSettleMode=null, state=null, resume=false, aborted=false, batchable=false}[\x00Sp\xd0\x00\x00\x00\x0b\x00\x00\x00\x05BP\x04@BR\x00\x00Ss\xd0\x00\x00\x00-\x00\x00\x00\x0d@@@@@@\xa3\x0atext/plain@\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@R\x00@\x00St\xd1\x00\x00\x00\x04\x00\x00\x00\x00\x00Sw\xa1\x04None]
      14:40:17,842 FINE  [proton.trace] IN: CH[0] : Transfer{handle=0, deliveryId=14, deliveryTag=15, messageFormat=0, settled=false, more=false, rcvSettleMode=null, state=null, resume=false, aborted=false, batchable=false}[\x00Sp\xd0\x00\x00\x00\x0b\x00\x00\x00\x05BP\x04@BR\x00\x00Ss\xd0\x00\x00\x00-\x00\x00\x00\x0d@@@@@@\xa3\x0atext/plain@\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@R\x00@\x00St\xd1\x00\x00\x00\x04\x00\x00\x00\x00\x00Sw\xa1\x04None]
      14:40:18,343 FINE  [proton.trace] IN: CH[0] : Transfer{handle=0, deliveryId=15, deliveryTag=16, messageFormat=0, settled=false, more=false, rcvSettleMode=null, state=null, resume=false, aborted=false, batchable=false}[\x00Sp\xd0\x00\x00\x00\x0b\x00\x00\x00\x05BP\x04@BR\x00\x00Ss\xd0\x00\x00\x00-\x00\x00\x00\x0d@@@@@@\xa3\x0atext/plain@\x83\x00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00@R\x00@\x00St\xd1\x00\x00\x00\x04\x00\x00\x00\x00\x00Sw\xa1\x04None]
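      The log shows dispositions arriving roughly every 100 ms (deliveries 141 through 146 are settled within about 200 ms), yet the broker reports "actual consumption rate: 0 msgs/second" and kills the consumer. One failure mode consistent with both the log and the release note ("could incorrectly close consumers ... after the first scan interval") is a rate computed from a counter that has not yet accumulated a full check period, truncated by integer division. This is a hypothetical model, not the actual Artemis implementation:

      ```python
      # Hedged illustration of one plausible failure mode: if the acked-message
      # counter is sampled before a full check period of acks has been recorded
      # (e.g. on the first scan, or just after a counter reset), integer
      # division reports 0 msgs/s even for a fast consumer.
      # Hypothetical model only -- not the actual Artemis code.

      def measured_rate(acked_since_last_check: int, period_s: int) -> int:
          # integer division truncates: 4 msgs over 5 s -> 0 msgs/s
          return acked_since_last_check // max(period_s, 1)

      # Counter sampled right after a reset: reported "0 msgs/second":
      assert measured_rate(0, 5) == 0
      # Even partial progress within the window truncates to zero:
      assert measured_rate(4, 5) == 0
      # Only a full window of acks clears the threshold:
      assert measured_rate(20, 5) == 4
      ```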
      

              Assignee: Andy Taylor
              Reporter: Jiri Daněk