Red Hat Data Grid / JDG-1135

REST requests through haproxy cause an INFO stacktrace when connection is closed


Steps to Reproduce

      • Start JDG 7.1 with the rest-endpoint enabled and a default cache
      • Configure and start haproxy (config below)
      • Make a REST call through haproxy (a sample call is sketched after this list)
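      A minimal sketch of that REST call in Java, assuming haproxy listens on localhost:80 (per the bind line in the config below) and the default cache is exposed at /rest/default; the endpoint address, cache name, key and value are illustrative assumptions, not taken from the issue:

      import java.io.DataOutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class RestThroughHaproxy {
          public static void main(String[] args) throws Exception {
              // Assumed proxy address and cache path; adjust to your setup.
              URL url = new URL("http://localhost:80/rest/default/someKey");

              // PUT a value into the cache through the proxy.
              HttpURLConnection put = (HttpURLConnection) url.openConnection();
              put.setRequestMethod("PUT");
              put.setDoOutput(true);
              put.setRequestProperty("Content-Type", "text/plain");
              try (DataOutputStream out = new DataOutputStream(put.getOutputStream())) {
                  out.writeBytes("someValue");
              }
              System.out.println("PUT status: " + put.getResponseCode());

              // Read it back; when haproxy later closes the server-side
              // connection, the server logs the IOException shown below.
              HttpURLConnection get = (HttpURLConnection) url.openConnection();
              get.setRequestMethod("GET");
              System.out.println("GET status: " + get.getResponseCode());
          }
      }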

    Description

      A REST call through haproxy (see config below) causes the following stacktrace when haproxy closes the connection:

      13:20:08,834 INFO [org.infinispan.rest.embedded.netty4.i18n] (nioEventLoopGroup-9-1) RESTEASY018512: Exception caught by handler: java.io.IOException: Connection reset by peer
      at sun.nio.ch.FileDispatcherImpl.read0(Native Method) [rt.jar:1.8.0_141]
      at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) [rt.jar:1.8.0_141]
      at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) [rt.jar:1.8.0_141]
      at sun.nio.ch.IOUtil.read(IOUtil.java:192) [rt.jar:1.8.0_141]
      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) [rt.jar:1.8.0_141]
      at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:367) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:118) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [netty-all-4.1.8.Final.jar:4.1.8.Final]
      at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_141]
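      The exception itself is expected: haproxy closed the TCP connection, so the next socket read fails with "Connection reset by peer", and the RESTEasy Netty adapter logs the full stacktrace at INFO. As a sketch only (not the actual JDG fix), a Netty 4 inbound handler can treat this case quietly instead of logging it:

      import java.io.IOException;

      import io.netty.channel.ChannelHandlerContext;
      import io.netty.channel.ChannelInboundHandlerAdapter;

      public class QuietResetHandler extends ChannelInboundHandlerAdapter {
          @Override
          public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
              if (cause instanceof IOException) {
                  // The peer (here haproxy) reset the socket; nothing to
                  // recover, so close our side without a stacktrace.
                  ctx.close();
              } else {
                  // Anything else is a real error; pass it down the pipeline.
                  ctx.fireExceptionCaught(cause);
              }
          }
      }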

      --------------------------------------------------------------------

      [root@ose3x-base conf]# cat haproxy.config
      global
          maxconn 4096
          pidfile /var/run/haproxy.pid
          daemon

      defaults
          mode http
          retries 3
          option redispatch
          maxconn 2000
          timeout connect 5000
          timeout client 50000
          timeout server 50000

      frontend public
          bind :80
          mode http
          tcp-request inspect-delay 5s
          tcp-request content accept if HTTP

          # Remove port from Host header
          http-request replace-header Host (.*):.* \1

          default_backend jdg_default

      backend openshift_default
          mode http
          option forwardfor
          #option http-keep-alive
          option http-pretend-keepalive

          # To configure custom default errors, you can either uncomment the
          # line below (server ... 127.0.0.1:8080) and point it to your custom
          # backend service or alternatively, you can send a custom 503 error.
          #server openshift_backend 127.0.0.1:8080
          errorfile 503 /var/lib/haproxy/conf/error-page-503.http

      backend jdg_default
          balance roundrobin
          timeout check 5000ms
          server node1 127.0.0.1:8080
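      For reference, the frontend's replace-header rule keeps everything before the last colon in the Host header and drops the port. A minimal Java illustration of the same substitution (the sample host value is hypothetical):

      public class HostRewriteDemo {
          public static void main(String[] args) {
              String host = "ose3x-base.example.com:80";  // hypothetical Host value
              // Same pattern as the haproxy rule: keep the capture group
              // before the last ':' and discard the port suffix.
              System.out.println(host.replaceFirst("(.*):.*", "$1"));
              // prints: ose3x-base.example.com
          }
      }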


People

    Assignee: Galder Zamarreño (rh-ee-galder)
    Reporter: William Decoste (wdecoste1@redhat.com) (Inactive)
