AMQ Broker / ENTMQBR-8407

Artemis logs warnings during clean shutdown of a server in a cluster


    • Type: Bug
    • Resolution: Done
    • Priority: Blocker

      Steps to reproduce:

      git clone -b master git@gitlab.mw.lab.eng.bos.redhat.com:mnovak/messaging-testsuite.git messaging-testsuite
      cd messaging-testsuite/scripts/
      
      groovy -DEAP_ZIP_URL=https://jenkins.eapqe.psi.redhat.com/job/eap-8.x-messaging-testing-prepare/1906/artifact/jboss-eap.zip PrepareServers7.groovy
      export WORKSPACE=$PWD
      export JBOSS_HOME_1=$WORKSPACE/server1/jboss-eap
      export JBOSS_HOME_2=$WORKSPACE/server2/jboss-eap
      export JBOSS_HOME_3=$WORKSPACE/server3/jboss-eap
      export JBOSS_HOME_4=$WORKSPACE/server4/jboss-eap
      
      cd ../jboss-hornetq-testsuite/
      mvn clean install -B \
          -Dartemis.version=2.26.0 \
          -Deap7.org.jboss.qa.hornetq.apps.clients.version=8.1692999002-SNAPSHOT \
          -Deap7.clients.version=8.1692999002-SNAPSHOT \
          -DreuseForks=false \
          -Dmaven.test.failure.ignore=true \
          -Dsurefire.failIfNoSpecifiedTests=false \
          -Dmaven.wagon.http.ssl.insecure=true \
          -Dmaven.wagon.http.ssl.allowall=true \
          -Dtest=ClusterTestCase#testNoWarningErrorsDuringRestartingNodesInCluster | tee log
      
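      After the run, the server logs can be checked for warnings directly. A quick sketch (the standalone/log/server.log path assumes the default EAP standalone layout of the servers prepared above):

      for i in 1 2 3 4; do
          grep -E ' (WARN|ERROR) ' "$WORKSPACE/server$i/jboss-eap/standalone/log/server.log"
      done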

      When shutting down a server in the cluster, Artemis sometimes logs warnings:

      21:54:05,122 WARN  [org.apache.activemq.artemis.core.server] (Thread-0 (ActiveMQ-client-netty-threads)) AMQ222295: There is a possible split brain on nodeID db71c866-4391-11ee-863c-fa163e50fa38. Topology update ignored
      
      22:38:32,250 WARN  [org.apache.activemq.artemis.core.client] (Thread-3 (ActiveMQ-scheduled-threads)) AMQ212064: Unable to receive cluster topology : java.lang.InterruptedException
              at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081)
              at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)
              at java.base/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.waitForTopology(ClientSessionFactoryImpl.java:525)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:741)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:549)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:528)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.server.cluster.ClusterController$ConnectRunnable.run(ClusterController.java:497)
              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at org.apache.activemq.artemis.journal//org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
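
      For context, the AMQ212064 warning is an interrupted wait. A minimal plain-JDK sketch (not Artemis code) of the pattern in this trace: a pooled task blocks on a CountDownLatch the way ClientSessionFactoryImpl.waitForTopology() does, and the pool is interrupted while it waits:

      import java.util.concurrent.CountDownLatch;
      import java.util.concurrent.Executors;
      import java.util.concurrent.ScheduledExecutorService;
      import java.util.concurrent.TimeUnit;

      // Plain-JDK sketch, not Artemis code: the task below stands in for
      // ClusterController$ConnectRunnable, the latch for the topology wait
      // inside waitForTopology().
      public class InterruptedTopologyWait {
          public static void main(String[] args) throws Exception {
              // Never counted down, like a topology that never arrives because
              // the peer broker is already shutting down.
              CountDownLatch topologyArrived = new CountDownLatch(1);
              ScheduledExecutorService scheduled = Executors.newSingleThreadScheduledExecutor();

              scheduled.submit(() -> {
                  try {
                      topologyArrived.await(30, TimeUnit.SECONDS); // waitForTopology() stand-in
                  } catch (InterruptedException e) {
                      // Artemis reports this as: AMQ212064: Unable to receive cluster topology
                      System.out.println("WARN: unable to receive cluster topology: " + e);
                      Thread.currentThread().interrupt();
                  }
              });

              Thread.sleep(200);       // let the task reach the await()
              scheduled.shutdownNow(); // shutdown interrupts the waiting thread
          }
      }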
      
      22:38:06,362 WARN  [org.apache.activemq.artemis.core.client] (Thread-20 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@120a57f2)) AMQ212004: Failed to connect to server.
      
      22:38:16,533 WARN  [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 39) AMQ222002: Timed out waiting for pool to terminate org.apache.activemq.artemis.utils.ActiveMQThreadPoolExecutor@69043f7c[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 204]. Interrupting all its threads!
      
      22:38:16,533 WARN  [org.apache.activemq.artemis.core.client] (Thread-4 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@120a57f2)) AMQ212064: Unable to receive cluster topology : java.lang.InterruptedException
              at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081)
              at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)
              at java.base/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.waitForTopology(ClientSessionFactoryImpl.java:525)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:741)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:549)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:528)
              at org.apache.activemq.artemis@2.21.0.redhat-00044//org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:491)
              at org.apache.activemq.artemis.journal//org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:56)
              at org.apache.activemq.artemis.journal//org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
              at org.apache.activemq.artemis.journal//org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:67)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at org.apache.activemq.artemis.journal//org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
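
      The AMQ222002 warning shows the other side of the same race: shutdown waits a bounded time for the pool to drain and then interrupts the remaining threads, which is what trips the interrupted topology waits above. A plain-JDK sketch of that sequence (the 3-second timeout is illustrative, not the broker's actual value):

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      public class PoolShutdownTimeout {
          public static void main(String[] args) throws InterruptedException {
              ExecutorService pool = Executors.newSingleThreadExecutor();

              pool.submit(() -> {
                  try {
                      Thread.sleep(60_000); // stand-in for a task still waiting on cluster topology
                  } catch (InterruptedException e) {
                      System.out.println("WARN: task interrupted during shutdown: " + e);
                      Thread.currentThread().interrupt();
                  }
              });

              pool.shutdown();                                   // stop accepting new tasks
              if (!pool.awaitTermination(3, TimeUnit.SECONDS)) { // timeout value is illustrative
                  // Artemis reports this as: AMQ222002: Timed out waiting for pool to
                  // terminate ... Interrupting all its threads!
                  System.out.println("WARN: timed out waiting for pool to terminate, interrupting its threads");
                  pool.shutdownNow();
              }
          }
      }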
      

      The expectation is that no warnings or errors are logged during a clean shutdown.

      Customer Impact: All of these warnings are harmless, but restarting an EAP server is a normal administrative operation and the server should not log any warnings or errors. Log monitoring might raise false alarms on the customer side.
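
      For illustration, a naive log-watch rule of the kind that would raise such false alarms (the send-alert hook is hypothetical):

      tail -F "$JBOSS_HOME_1/standalone/log/server.log" |
          grep --line-buffered -E 'AMQ222295|AMQ212064|AMQ212004|AMQ222002' |
          while read -r line; do
              send-alert "$line"   # hypothetical alerting hook
          done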

              Emmanuel Hugonnet (ehugonne1@redhat.com)