Red Hat Data Grid / JDG-674

JWS session externalization: random request failures due to InvalidMagicIdException


    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: JDG 7.1.0 ER1
    • Component/s: None

      HotRodSessionManager randomly fails with org.infinispan.server.hotrod.InvalidMagicIdException.
      When using the FINE session persistence strategy, failures seem to happen more often.
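
      A minimal standalone sketch of the client-side call path seen in the trace below: a read followed by a versioned remove, i.e. the same RemoteCache.removeWithVersion operation that HotRodSessionMetaDataFactory.remove drives. The server address and cache name are placeholders, not values from this issue:

        import org.infinispan.client.hotrod.MetadataValue;
        import org.infinispan.client.hotrod.RemoteCache;
        import org.infinispan.client.hotrod.RemoteCacheManager;
        import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

        public class VersionedRemoveLoop {
            public static void main(String[] args) {
                ConfigurationBuilder builder = new ConfigurationBuilder();
                builder.addServer().host("127.0.0.1").port(11222); // placeholder JDG server
                RemoteCacheManager manager = new RemoteCacheManager(builder.build());
                try {
                    RemoteCache<String, String> cache = manager.getCache("sessions"); // placeholder cache name
                    for (int i = 0; i < 10_000; i++) {
                        String key = "session-" + i;
                        cache.put(key, "payload");
                        // Read the entry's version, then remove conditionally on it;
                        // removeWithVersion is the operation that fails in the trace.
                        MetadataValue<String> value = cache.getWithMetadata(key);
                        cache.removeWithVersion(key, value.getVersion());
                    }
                } finally {
                    manager.stop();
                }
            }
        }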

      Full exception trace on client:

      org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=82 returned server error (status=0x81): org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 0
      	org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
      	org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
      	org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
      	org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
      	org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.returnVersionedOperationResponse(AbstractKeyOperation.java:63)
      	org.infinispan.client.hotrod.impl.operations.RemoveIfUnmodifiedOperation.executeOperation(RemoveIfUnmodifiedOperation.java:41)
      	org.infinispan.client.hotrod.impl.operations.RemoveIfUnmodifiedOperation.executeOperation(RemoveIfUnmodifiedOperation.java:19)
      	org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:54)
      	org.infinispan.client.hotrod.impl.RemoteCacheImpl.removeWithVersion(RemoteCacheImpl.java:110)
      	org.infinispan.client.hotrod.impl.InvalidatedNearRemoteCache.removeWithVersion(InvalidatedNearRemoteCache.java:85)
      	org.wildfly.clustering.web.hotrod.session.HotRodSessionMetaDataFactory.remove(HotRodSessionMetaDataFactory.java:101)
      	org.wildfly.clustering.web.hotrod.session.HotRodSessionMetaDataFactory.remove(HotRodSessionMetaDataFactory.java:38)
      	org.wildfly.clustering.web.hotrod.session.HotRodSessionFactory.findValue(HotRodSessionFactory.java:67)
      	org.wildfly.clustering.web.hotrod.session.HotRodSessionFactory.findValue(HotRodSessionFactory.java:35)
      	org.wildfly.clustering.web.hotrod.session.HotRodSessionManager.findSession(HotRodSessionManager.java:108)
      	org.wildfly.clustering.tomcat.session.DistributableManager.findSession(DistributableManager.java:167)
      	org.wildfly.clustering.tomcat.hotrod.HotRodManager.findSession(HotRodManager.java:236)
      	org.apache.catalina.connector.Request.doGetSession(Request.java:2891)
      	org.apache.catalina.connector.Request.getSessionInternal(Request.java:2551)
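
      For context on the error itself: every Hot Rod request frame starts with the magic byte 0xA0, immediately followed by the message id encoded as a variable-length long. The server reporting "Error reading magic byte or message id: 0" therefore means it read a zero byte where a new frame should begin, i.e. the decoder has lost alignment with the request stream on that connection. An illustrative sketch of that framing check (not the actual HotRodDecoder source):

        import java.io.DataInputStream;
        import java.io.IOException;

        // Illustrative only; the real check lives in
        // org.infinispan.server.hotrod.HotRodDecoder.readHeader.
        final class HotRodFraming {
            static final int REQUEST_MAGIC = 0xA0; // first byte of every Hot Rod request frame

            static long readHeader(DataInputStream in) throws IOException {
                int magic = in.readUnsignedByte();
                if (magic != REQUEST_MAGIC) {
                    // Reading 0 here means the decoder is looking at mid-frame bytes:
                    // the connection is no longer aligned on a frame boundary.
                    throw new IOException("Error reading magic byte or message id: " + magic);
                }
                return readVLong(in); // the message id follows as a variable-length long
            }

            // Hot Rod vLong: 7 payload bits per byte, high bit set means another byte follows.
            static long readVLong(DataInputStream in) throws IOException {
                int b = in.readUnsignedByte();
                long result = b & 0x7F;
                int shift = 7;
                while ((b & 0x80) != 0) {
                    b = in.readUnsignedByte();
                    result |= (long) (b & 0x7F) << shift;
                    shift += 7;
                }
                return result;
            }
        }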
      

      Corresponding exception on JDG server:

      13:11:56,185 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRodServerWorker-6-6) ISPN005003: Exception reported: org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 0
              at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.java:175)
              at org.infinispan.server.hotrod.HotRodDecoder.decodeHeader(HotRodDecoder.java:130)
              at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:88)
              at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
              at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
              at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
              at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
              at org.infinispan.server.core.transport.StatsChannelHandler.channelRead(StatsChannelHandler.java:28)
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
              at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
              at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
              at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
              at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:972)
              at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:386)
              at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302)
              at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
              at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
              at java.lang.Thread.run(Thread.java:745)
      
      13:11:56,187 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRodServerWorker-6-6) ISPN005003: Exception reported: io.netty.channel.unix.Errors$NativeIoException: syscall:read(...)() failed: Connection reset by peer
              at io.netty.channel.unix.FileDescriptor.readAddress(...)(Unknown Source)
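
      The two traces are consistent with each other: status 0x81 in the client exception is, per the Hot Rod protocol spec, the dedicated "invalid magic or message id" error status, matching the server-side InvalidMagicIdException, and the "Connection reset by peer" that follows is what the server sees once the client abandons the desynchronized connection. For reference, the protocol's error status bytes (decoded on the client in Codec20.checkForErrorsInResponseStatus):

        // Hot Rod error status bytes as documented in the protocol spec,
        // shown here only to decode the status=0x81 seen in the client trace.
        final class HotRodStatus {
            static String describe(int status) {
                switch (status) {
                    case 0x81: return "Invalid magic or message id"; // the error in this issue
                    case 0x82: return "Unknown command";
                    case 0x83: return "Unknown protocol version";
                    case 0x84: return "Request parsing error";
                    case 0x85: return "Server error";
                    case 0x86: return "Command timed out";
                    default:   return "Not an error status";
                }
            }
        }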
      

              Assignee: Paul Ferraro (pferraro@redhat.com)
              Reporter: Vojtech Juranek (vjuranek@redhat.com)
              Votes: 0
              Watchers: 3
