Infinispan / ISPN-12831

ShutdownCacheCommand{cacheName='rest'} java.lang.NullPointerException


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 12.0.2.Final
    • Component/s: Server
    • Labels: None

      The error below occurs when a server receives a cluster shutdown command from another server.
      The root cause is that the XML cache configurations differ from one server to another.
      I would still like the shutdown to be graceful.
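      For reference, a minimal embedded-mode sketch of the suspected trigger: the same cache defined with different configurations on two nodes, followed by a clustered cache shutdown. This is only an approximation of the two-server setup (the class name, cache modes and persistent-location paths are illustrative, and embedded mode may not reproduce the server-side NPE exactly):

              import org.infinispan.Cache;
              import org.infinispan.configuration.cache.CacheMode;
              import org.infinispan.configuration.cache.ConfigurationBuilder;
              import org.infinispan.configuration.global.GlobalConfigurationBuilder;
              import org.infinispan.manager.DefaultCacheManager;

              public class MismatchedShutdownSketch {
                  public static void main(String[] args) {
                      // Node A: clustered manager with global state enabled so a graceful
                      // shutdown can persist the consistent-hash state to disk.
                      DefaultCacheManager nodeA = new DefaultCacheManager(
                              GlobalConfigurationBuilder.defaultClusteredBuilder()
                                      .globalState().enable().persistentLocation("node-a-state")
                                      .build());
                      nodeA.defineConfiguration("rest",
                              new ConfigurationBuilder().clustering().cacheMode(CacheMode.DIST_SYNC).build());

                      // Node B: joins the same cluster but defines 'rest' differently,
                      // standing in for the differing XML configuration between servers.
                      DefaultCacheManager nodeB = new DefaultCacheManager(
                              GlobalConfigurationBuilder.defaultClusteredBuilder()
                                      .globalState().enable().persistentLocation("node-b-state")
                                      .build());
                      nodeB.defineConfiguration("rest",
                              new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());

                      Cache<String, String> rest = nodeA.getCache("rest");
                      rest.put("k", "v");

                      // Clustered, graceful shutdown of the cache; the coordinator sends
                      // ShutdownCacheCommand to the other members, which is where the NPE
                      // in LocalTopologyManagerImpl.writeCHState appears in the log below.
                      rest.shutdown();

                      nodeA.stop();
                      nodeB.stop();
                  }
              }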

      14:21:42,379 WARN  (jgroups-11,dlovison-mac-40349) [org.infinispan.CLUSTER] ISPN000071: Caught exception when handling command ShutdownCacheCommand{cacheName='rest'} java.lang.NullPointerException
              at org.infinispan.topology.LocalTopologyManagerImpl.writeCHState(LocalTopologyManagerImpl.java:700)
              at org.infinispan.topology.LocalTopologyManagerImpl.handleCacheShutdown(LocalTopologyManagerImpl.java:690)
              at org.infinispan.commands.topology.CacheShutdownCommand.invokeAsync(CacheShutdownCommand.java:36)
              at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:252)
              at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:174)
              at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:113)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1383)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1307)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1450)
              at org.jgroups.JChannel.up(JChannel.java:784)
              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
              at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:359)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:351)
              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
              at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:392)
              at org.jgroups.protocols.pbcast.NAKACK2.deliver(NAKACK2.java:931)
              at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:821)
              at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:602)
              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
              at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
              at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
              at org.jgroups.protocols.Discovery.up(Discovery.java:300)
              at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
              at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:834)
      
      14:22:43,118 WARN  (jgroups-19,dlovison-mac-40349) [org.infinispan.CLUSTER] ISPN000071: Caught exception when handling command ShutdownCacheCommand{cacheName='hotrodDistTx'} java.lang.NullPointerException
              at org.infinispan.topology.LocalTopologyManagerImpl.writeCHState(LocalTopologyManagerImpl.java:700)
              at org.infinispan.topology.LocalTopologyManagerImpl.handleCacheShutdown(LocalTopologyManagerImpl.java:690)
              at org.infinispan.commands.topology.CacheShutdownCommand.invokeAsync(CacheShutdownCommand.java:36)
              at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:252)
              at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:174)
              at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:113)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1383)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1307)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1450)
              at org.jgroups.JChannel.up(JChannel.java:784)
              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:913)
              at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:359)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:351)
              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
              at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:392)
              at org.jgroups.protocols.pbcast.NAKACK2.deliver(NAKACK2.java:931)
              at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:821)
              at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:602)
              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
              at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:186)
              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
              at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
              at org.jgroups.protocols.Discovery.up(Discovery.java:300)
              at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
              at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:834)
      
      14:23:15,325 INFO  (VERIFY_SUSPECT.TimerThread-21,dlovison-mac-40349) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel cluster: [dlovison-mac-40349|2] (1) [dlovison-mac-40349]
      14:23:15,336 INFO  (VERIFY_SUSPECT.TimerThread-21,dlovison-mac-40349) [org.infinispan.CLUSTER] ISPN100001: Node dlovison-mac-37316 left the cluster
      14:23:15,358 INFO  (non-blocking-thread--p2-t14) [org.infinispan.CLUSTER] [Context=___script_cache]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-37316, dlovison-mac-40349] with topology id 6
      14:23:15,358 INFO  (non-blocking-thread--p2-t1) [org.infinispan.CLUSTER] [Context=org.infinispan.COUNTER]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-37316, dlovison-mac-40349] with topology id 6
      14:23:15,358 INFO  (non-blocking-thread--p2-t8) [org.infinispan.CLUSTER] [Context=org.infinispan.CONFIG]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-37316, dlovison-mac-40349] with topology id 6
      14:23:15,358 INFO  (non-blocking-thread--p2-t6) [org.infinispan.CLUSTER] [Context=___hotRodTopologyCache_hotrod-default]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-40349] with topology id 2
      14:23:15,358 ERROR (non-blocking-thread--p2-t16) [org.infinispan.topology.ClusterCacheStatus] ISPN000228: Failed to recover cache ___protobuf_metadata state after the current node became the coordinator org.infinispan.topology.CacheJoinException: ISPN000409: Node dlovison-mac-40349 without persistent state attempting to join cache ___protobuf_metadata on cluster with state
              at org.infinispan.topology.ClusterCacheStatus.addMember(ClusterCacheStatus.java:222)
              at org.infinispan.topology.ClusterCacheStatus.addMembers(ClusterCacheStatus.java:670)
              at org.infinispan.topology.ClusterCacheStatus.recoverMembers(ClusterCacheStatus.java:650)
              at org.infinispan.topology.ClusterCacheStatus.doMergePartitions(ClusterCacheStatus.java:628)
              at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$recoverClusterStatus$6(ClusterTopologyManagerImpl.java:439)
              at java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1736)
              at org.infinispan.executors.LimitedExecutor.actualRun(LimitedExecutor.java:192)
              at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:176)
              at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:38)
              at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:237)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:834)
      
      14:23:15,360 INFO  (non-blocking-thread--p2-t16) [org.infinispan.CLUSTER] [Context=security-filestore]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-37316, dlovison-mac-40349] with topology id 6
      14:23:15,374 INFO  (non-blocking-thread--p2-t16) [org.infinispan.CLUSTER] [Context=security-filestore]ISPN100008: Updating cache members list [dlovison-mac-40349], topology id 7
      14:23:15,374 INFO  (non-blocking-thread--p2-t1) [org.infinispan.CLUSTER] [Context=org.infinispan.COUNTER]ISPN100008: Updating cache members list [dlovison-mac-40349], topology id 7
      14:23:15,381 INFO  (non-blocking-thread--p2-t6) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-37316, dlovison-mac-40349] with topology id 6
      14:23:15,381 INFO  (non-blocking-thread--p2-t8) [org.infinispan.CLUSTER] [Context=org.infinispan.CONFIG]ISPN100008: Updating cache members list [dlovison-mac-40349], topology id 7
      14:23:15,381 INFO  (non-blocking-thread--p2-t14) [org.infinispan.CLUSTER] [Context=___script_cache]ISPN100008: Updating cache members list [dlovison-mac-40349], topology id 7
      14:23:15,383 WARN  (non-blocking-thread--p2-t1) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN000320: After merge (or coordinator change), cache still hasn't recovered a majority of members and must stay in degraded mode. Current members are [dlovison-mac-40349], lost members are [dlovison-mac-37316], stable members are [dlovison-mac-37316, dlovison-mac-40349]
      14:23:15,383 INFO  (non-blocking-thread--p2-t1) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN100007: After merge (or coordinator change), recovered members [dlovison-mac-37316, dlovison-mac-40349] with topology id 6
      14:23:15,384 INFO  (non-blocking-thread--p2-t1) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN100011: Entering availability mode DEGRADED_MODE, topology id 7
      14:23:15,385 INFO  (non-blocking-thread--p2-t6) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100008: Updating cache members list [dlovison-mac-40349], topology id 7
      
      
      

            Assignee: Unassigned
            Reporter: Diego Lovison (dlovison@redhat.com)
            Votes: 0
            Watchers: 2