Bug
Resolution: Unresolved
There seem to be extra messages after message migration in a scale-down scenario (4 -> 2 pods) when one-way federation/mirroring is in place.
Scaling down the main brokers on its own works as expected.
The issue starts when the migrated messages are mirrored to the backups (duplication #1).
A second duplication happens on scale-down of the backup pods: message migration also runs among the backups themselves, so the messages can be duplicated again, and this surfaces when the main brokers go down and messages are read from the backups.
Is there anything we can do to help users avoid this double duplication in this scenario, or is it expected behaviour? Maybe we could disable message migration on mirror targets by default, since migrating already-mirrored messages does not seem to make sense (see the config sketch after the spec below).
spec:
  acceptors:
    - expose: true
      name: all-acceptor
      port: 61616
      protocols: all
    - expose: true
      name: amqp-acceptor
      port: 5672
      protocols: amqp
  adminPassword: adminPass
  adminUser: admin
  brokerProperties:
    - maxDiskUsage=85
    - clusterConfigurations.my-cluster.producerWindowSize=-1
    - addressSettings.#.redeliveryMultiplier=5
    - criticalAnalyzer=true
    - criticalAnalyzerTimeout=6000
    - criticalAnalyzerCheckPeriod=-1
    - criticalAnalyzerPolicy=LOG
    - AMQPConnections.dr.uri=tcp://dr-broker-all-acceptor-${STATEFUL_SET_ORDINAL}-svc.mirror-dr-tests.svc.cluster.local:61616
    - AMQPConnections.dr.retryInterval=5000
    - AMQPConnections.dr.user=admin
    - AMQPConnections.dr.password=adminPass
    - AMQPConnections.dr.connectionElements.mirror.type=MIRROR
    - AMQPConnections.dr.connectionElements.mirror.messageAcknowledgements=true
    - AMQPConnections.dr.connectionElements.mirror.queueCreation=true
    - AMQPConnections.dr.connectionElements.mirror.queueRemoval=true
    - addressConfigurations.queuea.queueConfigs.queuea.address=queuea
    - addressConfigurations.queuea.queueConfigs.queuea.routingType=ANYCAST
    - addressSettings.queuea.configDeleteAddresses=FORCE
    - addressSettings.queuea.configDeleteQueues=FORCE
    - addressConfigurations.queueb.queueConfigs.queueb.address=queueb
    - addressConfigurations.queueb.queueConfigs.queueb.routingType=ANYCAST
    - addressSettings.queueb.configDeleteAddresses=FORCE
    - addressSettings.queueb.configDeleteQueues=FORCE
  console:
    expose: true
  deploymentPlan:
    clustered: true
    enableMetricsPlugin: true
    extraMounts:
      secrets:
        - artemis-secret-logging-config
    jolokiaAgentEnabled: true
    journalType: aio
    managementRBACEnabled: true
    messageMigration: true
    persistenceEnabled: true
    requireLogin: true
    size: 4
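As a possible workaround for the second duplication, message migration could simply be left disabled on the mirror-target (dr) deployment. A minimal sketch, assuming the dr CR otherwise mirrors the spec above (the metadata names are illustrative, taken from the dr-broker pods in the logs below):

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: dr-broker                 # illustrative; matches the dr-broker-ss-* pods below
  namespace: mirror-dr-tests
spec:
  deploymentPlan:
    clustered: true
    persistenceEnabled: true
    size: 4
    # assumption: with messageMigration disabled on the mirror target, draining on
    # scale-down does not re-shuffle messages that were already mirrored from prod
    messageMigration: false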
Before scale-down scenario (4 prod/dr brokers)
*******************************************************************************
>>> Queue stats on node 424cec64-5df6-11ef-a95a-0a580a81023b, url=tcp://prod-broker-ss-0:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 6c1bbdd5-5df6-11ef-8900-0a580a81023d, url=tcp://prod-broker-ss-3.prod-broker-hdls-svc.mirror-prod-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 52dc4512-5df6-11ef-a15f-0a580a800223, url=tcp://prod-broker-ss-1.prod-broker-hdls-svc.mirror-prod-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 5f96f06c-5df6-11ef-a858-0a580a81023c, url=tcp://prod-broker-ss-2.prod-broker-hdls-svc.mirror-prod-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
2024-08-19T06:46:34.573Z DEBUG [default][ExecutorOperator:79] [mirror-dr-tests] dr-broker-ss-0 Running command: ./amq-broker/bin/artemis queue stat --user=admin --password=adminPass --maxRows=1000 --queueName=queue --maxColumnSize=-1 --url=tcp://dr-broker-ss-0:61616 --clustered
2024-08-19T06:46:37.726Z DEBUG [default][BundledArtemisClient:85] NOTE: Picked up JDK_JAVA_OPTIONS: -Dbroker.properties=/amq/extra/secrets/dr-broker-props/broker.properties
Connection brokerURL = tcp://dr-broker-ss-0:61616
2024-08-19 06:46:30,500 DEBUG [org.apache.activemq.artemis.utils.UUIDGenerator] using hardware address a:58:a:ffffff81:2:58
2024-08-19 06:46:30,834 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] Setting up call broker::getNodeID::[]
2024-08-19 06:46:30,850 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] management result:: 35d8c07f-5df6-11ef-97a0-0a580a81023a
2024-08-19 06:46:30,861 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] Setting up call broker::listNetworkTopology::[]
2024-08-19 06:46:30,866 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] management result::
[{"nodeID":"35d8c07f-5df6-11ef-97a0-0a580a81023a","live":"dr-broker-ss-0.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-0.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"},
 {"nodeID":"9099a178-5df6-11ef-9fc9-0a580a810240","live":"dr-broker-ss-3.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-3.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"},
 {"nodeID":"78182168-5df6-11ef-aa13-0a580a81023e","live":"dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"},
 {"nodeID":"84a10e8f-5df6-11ef-aedd-0a580a81023f","live":"dr-broker-ss-2.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-2.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"}]
*******************************************************************************
>>> Queue stats on node 35d8c07f-5df6-11ef-97a0-0a580a81023a, url=tcp://dr-broker-ss-0:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 9099a178-5df6-11ef-9fc9-0a580a810240, url=tcp://dr-broker-ss-3.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 78182168-5df6-11ef-aa13-0a580a81023e, url=tcp://dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 84a10e8f-5df6-11ef-aedd-0a580a81023f, url=tcp://dr-broker-ss-2.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
After scale-down of prod 4 -> 2 (8+4 msgs)
*******************************************************************************
>>> Queue stats on node 473bab17-5df8-11ef-a427-0a580a810247, url=tcp://prod-broker-ss-0:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |6            |6             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |3            |3             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
>>> Queue stats on node 5aefc4ad-5df8-11ef-8dd0-0a580a810248, url=tcp://prod-broker-ss-1.prod-broker-hdls-svc.mirror-prod-tests.svc.cluster.local:61616
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |2            |2             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |1            |1             |0               |0             |0              |ANYCAST     |false   |
*******************************************************************************
After scale-down of dr 4 -> 2 (12+6 msgs)
sh-4.4$ amq-broker/bin/artemis queue stat --user=admin --password=adminPass --maxRows=1000 --queueName=queue --maxColumnSize=-1 --url=tcp://dr-brok-ss-1:61616
NOTE: Picked up JDK_JAVA_OPTIONS: -Dbroker.properties=/amq/extra/secrets/dr-broker-props/broker.properties
Connection brokerURL = tcp://dr-broker-ss-1:61616
2024-08-19 07:06:13,000 DEBUG [org.apache.activemq.artemis.utils.UUIDGenerator] using hardware address a:58:a:ffffff80:2:36
2024-08-19 07:06:13,283 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] Setting up call broker::getNodeID::[]
2024-08-19 07:06:13,295 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] management result:: 81ae9745-5df8-11ef-98d2-0a580a800224
2024-08-19 07:06:13,302 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] Setting up call broker::listNetworkTopology::[]
2024-08-19 07:06:13,307 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] management result::
[{"nodeID":"8b621c8e-5df8-11ef-94eb-0a580a81024b","live":"dr-broker-ss-2:61616","primary":"dr-broker-ss-2:61616"},
 {"nodeID":"3aaacf73-5df8-11ef-bc87-0a580a810246","live":"dr-broker-ss-0.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-0.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"},
 {"nodeID":"963060c1-5df8-11ef-b231-0a580a81024c","live":"dr-broker-ss-3:61616","primary":"dr-broker-ss-3:61616"},
 {"nodeID":"81ae9745-5df8-11ef-98d2-0a580a800224","live":"dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"}]
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |6            |6             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |3            |3             |0               |0             |0              |ANYCAST     |false   |

sh-4.4$ amq-broker/bin/artemis queue stat --user=admin --password=adminPass --maxRows=1000 --queueName=queue --maxColumnSize=-1 --url=tcp://dr-broker-ss-0:61616
NOTE: Picked up JDK_JAVA_OPTIONS: -Dbroker.properties=/amq/extra/secrets/dr-broker-props/broker.properties
Connection brokerURL = tcp://dr-broker-ss-0:61616
2024-08-19 07:05:48,869 DEBUG [org.apache.activemq.artemis.utils.UUIDGenerator] using hardware address a:58:a:ffffff81:2:70
2024-08-19 07:05:49,189 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] Setting up call broker::getNodeID::[]
2024-08-19 07:05:49,201 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] management result:: 3aaacf73-5df8-11ef-bc87-0a580a810246
2024-08-19 07:05:49,207 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] Setting up call broker::listNetworkTopology::[]
2024-08-19 07:05:49,210 DEBUG [org.apache.activemq.artemis.api.core.management.SimpleManagement] management result::
[{"nodeID":"8b621c8e-5df8-11ef-94eb-0a580a81024b","live":"dr-broker-ss-2:61616","primary":"dr-broker-ss-2:61616"},
 {"nodeID":"3aaacf73-5df8-11ef-bc87-0a580a810246","live":"dr-broker-ss-0.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-0.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"},
 {"nodeID":"963060c1-5df8-11ef-b231-0a580a81024c","live":"dr-broker-ss-3:61616","primary":"dr-broker-ss-3:61616"},
 {"nodeID":"81ae9745-5df8-11ef-98d2-0a580a800224","live":"dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616","primary":"dr-broker-ss-1.dr-broker-hdls-svc.mirror-dr-tests.svc.cluster.local:61616"}]
|NAME  |ADDRESS|CONSUMER COUNT|MESSAGE COUNT|MESSAGES ADDED|DELIVERING COUNT|MESSAGES ACKED|SCHEDULED COUNT|ROUTING TYPE|INTERNAL|
|queuea|queuea |0             |6            |6             |0               |0             |0              |ANYCAST     |false   |
|queueb|queueb |0             |3            |3             |0               |0             |0              |ANYCAST     |false   |
Also, it seems artemis queue stat --clustered is broken after scale-down in this scenario. I will open a separate Jira for that.
- depends on: ENTMQBR-6414 Drainer task should be run as a job and not as unmanaged pod (To Do)
- is related to: ENTMQBR-9066 [QE] Implement federation/mirroring tests on openshift (Refinement)
- relates to: ENTMQBR-9355 [Scale down] artemis queue stat clustered is broken after scale-down scenario (Backlog)