Red Hat Data Grid / JDG-7232

NPE thrown when attempting to remove entry with xsite. RHDG 8.4.6.


    • Workaround Exists

      A possible workaround is to use SYNC for cache storage for the problem cache.
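
      For illustration, a minimal programmatic sketch of what that workaround could look like, assuming a distributed cache and a backup site named "site2" (both names are assumptions, not taken from the report). IRAC, the asynchronous cross-site replication component visible in the stack trace below, is only used for ASYNC backups, so a SYNC backup strategy sidesteps the failing code path:

      import org.infinispan.configuration.cache.BackupConfiguration;
      import org.infinispan.configuration.cache.CacheMode;
      import org.infinispan.configuration.cache.Configuration;
      import org.infinispan.configuration.cache.ConfigurationBuilder;

      public class SyncBackupConfig {
          public static Configuration dataCacheConfig() {
              ConfigurationBuilder builder = new ConfigurationBuilder();
              builder.clustering().cacheMode(CacheMode.DIST_SYNC);
              // Back up to the remote site synchronously instead of asynchronously,
              // avoiding the IRAC path that fails in the stack trace below.
              builder.sites().addBackup()
                     .site("site2") // assumed backup site name
                     .strategy(BackupConfiguration.BackupStrategy.SYNC);
              return builder.build();
          }
      }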

Description

      There is a primary cluster with multiple caches, all configured from the same template, in RHDG 8.4.6.

      When the backup cluster is brought online, most of the caches are backed up fine except this one cache, and only some entries in that cache cause an issue.

      The issue is seen in the secondary cluster's logs as a repeated NullPointerException. Below is the stack trace:

       

      2024-07-20 08:18:20,222 ERROR (jgroups-1426,xsite,node1:2) [org.infinispan.interceptors.impl.InvocationContextInterceptor] ISPN000136: Error executing command IracPutKeyValueCommand on Cache 'DataCache', writing keys [WrappedByteArray[\B\F\0\0 (13 bytes)]] java.lang.NullPointerException: Cannot invoke "Object.getClass()" because "o" is null
      at org.infinispan.interceptors.impl.IsMarshallableInterceptor.throwNotSerializable(IsMarshallableInterceptor.java:144)
      at org.infinispan.interceptors.impl.IsMarshallableInterceptor.checkMarshallable(IsMarshallableInterceptor.java:135)
      at org.infinispan.interceptors.impl.IsMarshallableInterceptor.visitIracPutKeyValueCommand(IsMarshallableInterceptor.java:71)
      at org.infinispan.commands.write.IracPutKeyValueCommand.acceptVisitor(IracPutKeyValueCommand.java:109)
      at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:59)
      at org.infinispan.interceptors.DDAsyncInterceptor.handleDefault(DDAsyncInterceptor.java:54)
      at org.infinispan.interceptors.DDAsyncInterceptor.visitIracPutKeyValueCommand(DDAsyncInterceptor.java:105)
      at org.infinispan.commands.write.IracPutKeyValueCommand.acceptVisitor(IracPutKeyValueCommand.java:109)
      at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:128)
      at org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:90)
      at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invokeAsync(AsyncInterceptorChainImpl.java:220)
      at org.infinispan.cache.impl.InvocationHelper.doInvokeAsync(InvocationHelper.java:318)
      at org.infinispan.cache.impl.InvocationHelper.invokeAsync(InvocationHelper.java:156)
      at org.infinispan.xsite.ClusteredCacheBackupReceiver.removeKey(ClusteredCacheBackupReceiver.java:221)
      at org.infinispan.xsite.commands.remote.IracPutManyRequest$Remove.execute(IracPutManyRequest.java:155)
      at org.infinispan.xsite.commands.remote.IracPutManyRequest.executeOperation(IracPutManyRequest.java:61)
      at org.infinispan.xsite.commands.remote.IracUpdateKeyRequest.invokeInLocalCache(IracUpdateKeyRequest.java:24)
      at org.infinispan.xsite.commands.remote.XSiteCacheRequest.invokeInLocalSite(XSiteCacheRequest.java:47)
      at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromRemoteSite(GlobalInboundInvocationHandler.java:93)
      at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1543)
      at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1465)
      at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1642)
      at org.jgroups.JChannel.up(JChannel.java:733)
      at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:936)
      at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:460)
      at org.jgroups.protocols.relay.RELAY2.route(RELAY2.java:352)
      at org.jgroups.protocols.relay.RELAY2.handleMessage(RELAY2.java:332)
      at org.jgroups.protocols.relay.RELAY2.handleRelayMessage(RELAY2.java:285)
      at org.jgroups.protocols.relay.Relayer2$Bridge.receive(Relayer2.java:135)
      at org.jgroups.JChannel.up(JChannel.java:736)
      at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:936)
      at org.jgroups.protocols.FRAG2.up(FRAG2.java:139)
      at org.jgroups.protocols.FlowControl.up(FlowControl.java:253)
      at org.jgroups.protocols.FlowControl.up(FlowControl.java:261)
      at org.jgroups.protocols.pbcast.GMS.up(GMS.java:845)
      at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:226)
      at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1083)
      at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:822)
      at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:804)
      at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:453)
      at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:680)
      at org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:105)
      at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:180)
      at org.jgroups.protocols.FD_SOCK2.up(FD_SOCK2.java:188)
      at org.jgroups.protocols.MERGE3.up(MERGE3.java:274)
      at org.jgroups.protocols.Discovery.up(Discovery.java:294)
      at org.jgroups.stack.Protocol.up(Protocol.java:340)
      at org.jgroups.protocols.TP.passMessageUp(TP.java:1184)
      at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:107)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
      at java.base/java.lang.Thread.run(Thread.java:833)
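
      Reading the trace bottom-up: the backup site receives an IracPutManyRequest from the primary site, its Remove sub-operation calls ClusteredCacheBackupReceiver.removeKey, and the resulting IracPutKeyValueCommand reaches IsMarshallableInterceptor, which invokes getClass() on a value that is null, consistent with the command representing a removal rather than a put. Below is a minimal self-contained sketch of that failure pattern, written as an illustration rather than the actual Infinispan source:

      // Hypothetical illustration of the failure pattern in the trace: a
      // marshallability check dereferences a value that is null for removals.
      public class NullValueCheckSketch {
          public static void main(String[] args) {
              Object value = null; // a replicated removal carries no value
              checkMarshallable(value);
          }

          static void checkMarshallable(Object o) {
              if (!isMarshallable(o)) {
                  // Throws NullPointerException: Cannot invoke "Object.getClass()"
                  // because "o" is null, matching the logged error.
                  throw new IllegalArgumentException("Not marshallable: " + o.getClass());
              }
          }

          static boolean isMarshallable(Object o) {
              return false; // placeholder; the real interceptor consults the marshaller
          }
      }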

      What they have found to be different about this one cache is that it is the only cache where their process makes calls to remove entries from the cache.
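
      For context, a minimal Hot Rod client sketch of the kind of removal that appears to trigger the failing replication (the cache name comes from the log; the server endpoint and key are assumptions):

      import org.infinispan.client.hotrod.RemoteCache;
      import org.infinispan.client.hotrod.RemoteCacheManager;
      import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

      public class RemoveEntrySketch {
          public static void main(String[] args) {
              ConfigurationBuilder builder = new ConfigurationBuilder();
              builder.addServer().host("127.0.0.1").port(11222); // assumed primary-site endpoint
              try (RemoteCacheManager manager = new RemoteCacheManager(builder.build())) {
                  RemoteCache<String, String> cache = manager.getCache("DataCache");
                  cache.put("some-key", "some-value");
                  // The remove succeeds on the primary site; with an ASYNC backup,
                  // IRAC later replicates the removal, and the backup site logs the NPE.
                  cache.remove("some-key");
              }
          }
      }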
