Unpack the Artemis bundle and cd into it. Then create the two clustered broker instances:

$ JAVA_HOME=/usr/lib/jvm/java-11 ./bin/artemis create --silent --force --user artemis --password artemis --role amq --port-offset 0 --data ./data --allow-anonymous --no-autotune --verbose --name cluster1 --clustered --staticCluster tcp://localhost:61617 --max-hops 1 --queues testQueue lm1

$ JAVA_HOME=/usr/lib/jvm/java-11 ./bin/artemis create --silent --force --user artemis --password artemis --role amq --data ./data --allow-anonymous --no-autotune --verbose --name cluster2 --clustered --staticCluster tcp://localhost:61616 --max-hops 1 --port-offset 1 --queues testQueue lm2

Edit etc/broker.xml for both brokers and add a redistribution-delay=0 entry to the '#' address mapping (this entry is sketched at the end of this note).

Start the two broker instances in two different terminal sessions:

$ cd lm1
$ JAVA_HOME=/usr/lib/jvm/java-11 ./bin/artemis run

Verify that something is now listening on port 61616, and nothing yet on 61617. In another terminal session:

$ cd lm2
$ JAVA_HOME=/usr/lib/jvm/java-11 ./bin/artemis run

Check that something is now listening on 61617 as well. I do these checks so that I can tell which terminal is running the 'port 61616' broker and which the 'port 61617' broker, because I will need to shut down a specific one later.

Unpack /home/kevin/Downloads/amqp-pubsub-local-20230215.zip to reveal amqp-subscriber-local and amqp-publisher-local. Start two terminal sessions; in one, cd amqp-subscriber-local, in the other cd amqp-publisher-local. In both sessions, edit src/main/resources/application.properties and set the broker URL as follows. The order of brokers must be the same in both cases, with 61616 first:

amqp.remoteURI=failover:(amqp://localhost:61616,amqp://localhost:61617)?failover.nested.transport.connectTimeout=1000

In both client terminals, run the corresponding application (a minimal standalone publisher in the same spirit is sketched at the end of this note):

$ JAVA_HOME=/usr/lib/jvm/java-11 mvn spring-boot:run

It takes about a minute for the producer to start saying 'published xxx messages'. The consumer takes longer to do anything, because the default pre-fetch is so large; eventually it will say 'Received xxx messages'.

When a few hundred messages have been consumed, ctrl-c the broker listening on 61616. Both client programs will show a heap of error messages, but will eventually start reporting that they are producing and consuming again. Again, the consumer takes longer to recover than the producer.

When both producer and consumer are working again, restart the 61616 broker and wait for it to show that it is fully started (that is, the log says 'Console available'). Then ctrl-c the 61617 broker. Again there will be a slew of error messages from the clients. Wait for both clients to report that they are sending and receiving messages, then restart the 61617 broker. In my tests, the producer application has published about 7000 messages by this point.

Both brokers are now running. Wait until the producer application says 'Published 10000 messages' and the consumer says 'Received 10000 messages'.

Check how many files remain in each broker's data/large-messages/ directory. I invariably find at least a few in each broker, sometimes a few hundred:

$ ls -l ../lm1/data/large-messages/|wc -l
962
$ ls -l ../lm2/data/large-messages/|wc -l
2

With the apache-artemis-2.29.0-SNAPSHOT-bin.zip release, some of the stuck files disappear on restarting the broker, but not all of them.
After restarting both brokers:

$ ls -l ../lm2/data/large-messages/|wc -l
1
$ ls -l ../lm2/data/large-messages/|wc -l
1

During these tests, I often see messages like these in the consumer client:

2023-03-15 08:14:11.147  INFO 872530 --- [-b63365205cd6:4] org.apache.qpid.jms.JmsSession : A JMS MessageConsumer has been closed: JmsConsumerInfo: { ID:2fb3b3df-c3c4-4c1d-aa93-b63365205cd6:4:14:1, destination = amqp-demo-queue }
2023-03-15 08:14:11.230  WARN 872530 --- [mqp-demo-queue]] c.c.j.DefaultJmsMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'amqp-demo-queue' - trying to recover. Cause: Remote did not respond to a drain request in time

Sometimes when this happens, it is followed by messages about transactions being rolled back:

javax.jms.TransactionRolledBackException: Commit failed, connection offline: readAddress(..) failed: Connection reset by peer

I can't help thinking that these consumer failures are associated with the 'stuck files' problem in some way, but I can't be sure.
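For reference, the redistribution-delay entry mentioned in the setup above amounts to adding a single element inside the existing catch-all address mapping in each broker's etc/broker.xml. This is only a sketch: the other settings that 'artemis create' generates inside that block are omitted.

<address-settings>
   <address-setting match="#">
      <!-- Added for this test: redistribute messages to other cluster members
           as soon as the last local consumer on a queue is closed. -->
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>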
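For anyone who does not have the amqp-pubsub-local zip to hand, the following is a minimal standalone sketch of a publisher in the same spirit; it is not the actual code from amqp-publisher-local, whose source is not reproduced here. The queue name is taken from the consumer log above, the credentials match the 'artemis create' commands, and the roughly 200 KiB payload is an assumption chosen to exceed the broker's default large-message threshold so that messages end up in data/large-messages/.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.qpid.jms.JmsConnectionFactory;

public class LargeMessagePublisher {

    public static void main(String[] args) throws Exception {
        // Same failover URI as in application.properties, with 61616 listed first.
        String uri = "failover:(amqp://localhost:61616,amqp://localhost:61617)"
                + "?failover.nested.transport.connectTimeout=1000";
        ConnectionFactory factory = new JmsConnectionFactory("artemis", "artemis", uri);

        // Assumption: a ~200 KiB payload, large enough to be stored as a large message.
        String payload = "x".repeat(200 * 1024);

        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("amqp-demo-queue");
            MessageProducer producer = session.createProducer(queue);

            for (int i = 1; i <= 10_000; i++) {
                TextMessage message = session.createTextMessage(payload);
                producer.send(message);
                if (i % 100 == 0) {
                    System.out.println("published " + i + " messages");
                }
            }
        }
    }
}

The real clients evidently use Spring's DefaultJmsMessageListenerContainer and transacted sessions (judging by the log messages above); this sketch deliberately keeps things simpler.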