ISPN-6884: NullPointerException when performing Rolling Upgrade Procedure using Kubernetes


Details

    • Type: Bug
    • Resolution: Obsolete
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 9.0.0.Alpha3
    • Component/s: Loaders and Stores, Server
    • Labels: None
    • Steps to Reproduce:

      The procedure for OpenShift looks like the following:

      1. Start OpenShift cluster using:
        oc cluster up
        

        Note that you are logged in as the developer user.

      2. Create a new Infinispan cluster using the standard configuration. Later on I will use the REST interface for playing with the data, so turn on compatibility mode (a configuration sketch follows the procedure):
        oc new-app slaskawi/infinispan-experiments
        
      3. Note that you should always use labels for your clusters. I'll label my cluster cluster=cluster-1:
        oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
        oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
        oc label dc/infinispan-experiments cluster=cluster-1
        
      4. Scale up the deployment
        oc scale dc/infinispan-experiments --replicas=3
        
      5. Create a route to the service
        oc expose svc/infinispan-experiments
        
      6. Add some entries using REST
        curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
        
      7. Check if the entry is there
        curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
        
      8. Now we can spin up a new cluster. Again, it is very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition (shown as a diff against the original server configuration; an assembled sketch follows the procedure):
        212,214d214
        <                     <remote-store cache="default" hotrod-wrapping="false" read-only="true">
        <                         <remote-server outbound-socket-binding="remote-store-hotrod-server" />
        <                     </remote-store>
        449,451c449
        <             <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
        <             <!-- However DNS configuration with local cluster might be tricky -->
        <             <remote-destination host="172.30.14.112" port="11222"/>
        
      9. Spinning up the new cluster involves the following commands:
        oc new-app slaskawi/infinispan-experiments-2
        oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
        oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
        oc label dc/infinispan-experiments-2 cluster=cluster-2
        oc expose svc infinispan-experiments-2
        
      10. At this stage we have 2 clusters (the old one with selector cluster=cluster-1 and the new one with selector cluster=cluster-2). Both should be up and running (check that with oc status -v). Cluster-2 has remote stores which point to Cluster-1.
      11. Switch all the clients to cluster=cluster-2. Depending on your configuration, you probably want to create a new Route (if your clients connect to the cluster through Routes) or modify the Service (one possible command is sketched after the procedure).
      12. Fetch all remaining keys from cluster=cluster-1
        oc get pods --selector=deploymentconfig=infinispan-experiments-2
        (write down the name of one of the listed pods and use it in the next command, or capture it automatically as sketched after the procedure)
        oc exec infinispan-experiments-2-3-pc7sg -- '/opt/jboss/infinispan-server/bin/ispn-cli.sh' '-c' '--controller=$(hostname -i):9990' '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=default:synchronize-data(migrator-name=hotrod)'
        
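      Step 2 relies on compatibility mode being enabled, but the issue does not show how the slaskawi/infinispan-experiments image configures it. As a minimal sketch only, assuming the image ships a server configuration with the stock "clustered" cache container, the cache definition would carry something along these lines (attribute values other than the cache name are assumptions):

        <!-- Sketch, not taken from the image: compatibility mode on the default
             cache so that entries written over Hot Rod are readable over REST. -->
        <distributed-cache name="default" mode="SYNC">
            <compatibility enabled="true"/>
        </distributed-cache>
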
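      The diff in step 8 shows only the added lines. Assembled, and assuming the standard layout of the server configuration file (cache definitions inside the datagrid-infinispan subsystem, socket bindings inside the socket-binding-group), the two fragments would sit roughly like this; the host 172.30.14.112 is the Cluster-1 Hot Rod address from the example:

        <!-- First hunk of the diff: the remote cache store on Cluster-2's
             "default" cache, reaching Cluster-1 through the outbound socket
             binding declared below. -->
        <distributed-cache name="default">
            <remote-store cache="default" hotrod-wrapping="false" read-only="true">
                <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
            </remote-store>
        </distributed-cache>

        <!-- Second hunk of the diff: the outbound socket binding, declared
             later in the file inside the <socket-binding-group> element. -->
        <outbound-socket-binding name="remote-store-hotrod-server">
            <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
            <!-- However DNS configuration with local cluster might be tricky -->
            <remote-destination host="172.30.14.112" port="11222"/>
        </outbound-socket-binding>
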
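      Step 11 deliberately leaves the client switch open because it depends on how the clients connect. As one hedged possibility, if the clients use the Route created in step 5, that Route can simply be re-pointed at the Cluster-2 service (route and service names as used in the commands above):

        # Sketch only: make the existing Route target the Cluster-2 service so
        # clients keep using the same hostname.
        oc patch route/infinispan-experiments \
          -p '{"spec":{"to":{"kind":"Service","name":"infinispan-experiments-2"}}}'
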
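      For step 12, the pod name does not have to be copied by hand; it can be captured with a jsonpath query. This is only a convenience sketch around the command already shown above:

        # Sketch only: pick the first pod of the Cluster-2 deployment.
        POD=$(oc get pods --selector=deploymentconfig=infinispan-experiments-2 \
              -o jsonpath='{.items[0].metadata.name}')

        # Run the CLI through a shell inside the pod so that $(hostname -i)
        # resolves to the pod's own IP address.
        oc exec "$POD" -- sh -c '/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 "/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=default:synchronize-data(migrator-name=hotrod)"'
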

    Description

      During the Rolling Upgrade Procedure with compatibility caches on OpenShift I encountered a weird NullPointerException.

      Below are two logs from the Source cluster:

      05:14:54,623 ERROR [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-9-8) ISPN005022: Exception writing response with messageId=59: java.lang.NullPointerException
      	at org.infinispan.server.hotrod.Encoder2x$$anonfun$writeResponse$9.apply(Encoder2x.scala:353)
      	at org.infinispan.server.hotrod.Encoder2x$$anonfun$writeResponse$9.apply(Encoder2x.scala:343)
      	at scala.collection.immutable.List.foreach(List.scala:381)
      	at org.infinispan.server.hotrod.Encoder2x$.writeResponse(Encoder2x.scala:343)
      	at org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:45)
      	at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
      	at io.netty.channel.ChannelHandlerInvokerUtil.invokeWriteNow(ChannelHandlerInvokerUtil.java:157)
      	at io.netty.channel.DefaultChannelHandlerInvoker.invokeWrite(DefaultChannelHandlerInvoker.java:372)
      	at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:391)
      	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:252)
      	at io.netty.handler.logging.LoggingHandler.write(LoggingHandler.java:241)
      	at io.netty.channel.ChannelHandlerInvokerUtil.invokeWriteNow(ChannelHandlerInvokerUtil.java:157)
      	at io.netty.channel.DefaultChannelHandlerInvoker$WriteTask.run(DefaultChannelHandlerInvoker.java:496)
      	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
      	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:279)
      	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
      	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
      	at java.lang.Thread.run(Thread.java:745)
      

      And from the Destination cluster:

      05:17:17,555 WARNING [io.netty.channel.DefaultChannelPipeline] (nioEventLoopGroup-7-2) An exception was thrown by a user handler's exceptionCaught() method:: java.lang.NullPointerException
      	at org.jboss.resteasy.plugins.server.netty.RequestHandler.exceptionCaught(RequestHandler.java:91)
      	at io.netty.channel.ChannelHandlerInvokerUtil.invokeExceptionCaughtNow(ChannelHandlerInvokerUtil.java:64)
      	at io.netty.channel.DefaultChannelHandlerInvoker$5.run(DefaultChannelHandlerInvoker.java:117)
      	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
      	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
      	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
      	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
      	at java.lang.Thread.run(Thread.java:745)
      
      05:17:17,556 WARNING [io.netty.channel.DefaultChannelPipeline] (nioEventLoopGroup-7-2) .. and the cause of the exceptionCaught() was:: java.io.IOException: Connection reset by peer
      	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
      	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
      	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
      	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
      	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
      	at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
      	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1054)
      	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:245)
      	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:106)
      	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
      	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
      	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
      	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
      	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
      	at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
      	at java.lang.Thread.run(Thread.java:745)
      


            People

              Assignee: Gustavo Fernandes (gfernand@redhat.com) (Inactive)
              Reporter: Sebastian Ɓaskawiec (slaskawi@redhat.com) (Inactive)
              Votes: 0
              Watchers: 2
