Infinispan / ISPN-399

State transfer requests for non-existing caches should not throw an exception


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 4.1.0.ALPHA3
    • Affects Version/s: 4.0.0.Final, 4.1.0.ALPHA2
    • Component/s: State Transfer
    • Labels: None

      I think this is my lucky week. Here comes another puzzle:

      I'm trying to prototype having a separate cache for Hot Rod topology view information, and here's the sequence of events that happens:

      1. Start the 1st Hot Rod server, which also starts up the Hot Rod topology cache (cache name = ___hotRodTopologyCache). JGroups local address is eq-11980:
      2010-04-13 17:13:36,056 3881 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (main Cache local address is eq-11980, physical addresses are [127.0.0.1:7900]
      2010-04-13 17:13:36,056 3881 TRACE [org.infinispan.factories.GlobalComponentRegistry] (main Registering a shutdown hook. Configured behavior = DEFAULT
      2010-04-13 17:13:36,057 3882 INFO [org.infinispan.factories.GlobalComponentRegistry] (main Infinispan version: Infinispan 'Starobrno' 4.1.0.SNAPSHOT
      2010-04-13 17:13:36,057 3882 TRACE [org.infinispan.factories.GlobalComponentRegistry] (main Named component register, put org.infinispan.factories.ComponentRegistry@1d6fbb3 under ___hotRodTopologyCache in {}@ee6ad6

      2. Start the 2nd Hot Rod server, which also starts up the Hot Rod topology cache (cache name = ___hotRodTopologyCache). JGroups local address is eq-54009:
      2010-04-13 17:13:36,474 4299 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (main Cache local address is eq-54009, physical addresses are [127.0.0.1:7901]
      2010-04-13 17:13:36,474 4299 TRACE [org.infinispan.factories.GlobalComponentRegistry] (main Registering a shutdown hook. Configured behavior = DEFAULT
      2010-04-13 17:13:36,474 4299 INFO [org.infinispan.factories.GlobalComponentRegistry] (main Infinispan version: Infinispan 'Starobrno' 4.1.0.SNAPSHOT
      2010-04-13 17:13:36,474 4299 TRACE [org.infinispan.factories.GlobalComponentRegistry] (main Named component register, put org.infinispan.factories.ComponentRegistry@1de007d under ___hotRodTopologyCache in {}@b5ad68

      3. Since the topology caches are configured with fetchInMemoryState, when eq-54009 starts it requests eq-11980 to generate state (see the embedded-side sketch after step 6):
      2010-04-13 17:13:36,501 4326 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (STREAMING_STATE_TRANSFER-sender-1,Infinispan-Cluster,eq-11980 Received request to generate state for cache named '___hotRodTopologyCache'. Attempting to generate state.

      4. Now, let's send a put command to the server on eq-11980 for the brand new named cache 'hotRodReplSync' (a client-side sketch follows after step 6):
      2010-04-13 17:13:36,752 4577 TRACE [org.infinispan.server.hotrod.RequestResolver$] (HotRodWorker-1-1 Operation code: 1 has been matched to Some(PutRequest)
      2010-04-13 17:13:36,754 4579 TRACE [org.infinispan.server.hotrod.HotRodDecoder$] (HotRodWorker-1-1 Decoded header HotRodHeader

      {op=PutRequest, messageId=1, cacheName=hotRodReplSync, flag=NoFlag, clientIntelligence=0, topologyId=0}

      5. Now eq-11980 attempts to fetch state from eq-54009 for cache 'hotRodReplSync':
      2010-04-13 17:13:36,790 4615 INFO [org.infinispan.remoting.rpc.RpcManagerImpl] (HotRodWorker-1-1 Trying to fetch state from eq-54009
      2010-04-13 17:13:36,792 4617 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,Infinispan-Cluster,eq-11980 Received state for cache named 'hotRodReplSync'. Attempting to apply state.
      2010-04-13 17:13:36,792 4617 TRACE [org.infinispan.factories.GlobalComponentRegistry] (Incoming-1,Infinispan-Cluster,eq-11980 Named component register, get hotRodReplSync from {___hotRodTopologyCache=org.infinispan.factories.ComponentRegistry@1d6fbb3, hotRodReplSync=org.infinispan.factories.ComponentRegistry@19c5048}@ee6ad6
      2010-04-13 17:13:36,792 4617 DEBUG [org.infinispan.statetransfer.StateTransferManagerImpl] (Incoming-1,Infinispan-Cluster,eq-11980 Applying state
      2010-04-13 17:13:36,792 4617 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (STREAMING_STATE_TRANSFER-sender-1,Infinispan-Cluster,eq-54009 Received request to generate state for cache named 'hotRodReplSync'. Attempting to generate state.

      6. But eq-54009 does not have a 'hotRodReplSync' cache yet, because no requests have been sent to the 2nd server (eq-54009) yet, and so the state generation fails with:
      2010-04-13 17:13:36,792 4617 INFO [org.infinispan.remoting.InboundInvocationHandlerImpl] (STREAMING_STATE_TRANSFER-sender-1,Infinispan-Cluster,eq-54009 Cache named hotRodReplSync does not exist on this cache manager!
      2010-04-13 17:13:36,794 4619 ERROR [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (STREAMING_STATE_TRANSFER-sender-1,Infinispan-Cluster,eq-54009 Caught while responding to state transfer request
      org.infinispan.statetransfer.StateTransferException: Cache named hotRodReplSync does not exist on this cache manager!
      at org.infinispan.remoting.InboundInvocationHandlerImpl.getStateTransferManager(InboundInvocationHandlerImpl.java:85)
      at org.infinispan.remoting.InboundInvocationHandlerImpl.generateState(InboundInvocationHandlerImpl.java:77)
      at org.infinispan.remoting.transport.jgroups.JGroupsTransport.getState(JGroupsTransport.java:586)

      IMO, step 6 is a valid case and should not result in an error. As you can see here, a server might not have received any requests for a cache yet, and so might not have started that cache yet. The most reasonable thing for eq-54009 to do here is to serve no state at all, rather than throwing an error.
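
      To illustrate the suggested behaviour only (this is a hypothetical sketch, not the attached Preliminary_fix.patch, and the type, field and method names are stand-ins for whatever InboundInvocationHandlerImpl really uses): when a state generation request names a cache that is not running on this cache manager, log it and write no state instead of throwing:

      import java.io.OutputStream;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.logging.Logger;

      public class TolerantStateRequestHandler {
         // Hypothetical stand-in for the per-cache state transfer manager
         public interface StateProvider {
            void generateState(OutputStream out);
         }

         // Hypothetical view of the caches that have actually been started on this node
         private final Map<String, StateProvider> runningCaches = new ConcurrentHashMap<String, StateProvider>();
         private final Logger log = Logger.getLogger(TolerantStateRequestHandler.class.getName());

         public void generateState(String cacheName, OutputStream out) {
            StateProvider provider = runningCaches.get(cacheName);
            if (provider == null) {
               // The cache simply has not been started here yet: a valid situation, so
               // serve no state rather than failing the whole state transfer request
               log.info("Cache named " + cacheName + " does not exist on this cache manager; not serving any state.");
               return;
            }
            provider.generateState(out);
         }
      }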

        1. infinispan.log (217 kB), attached by Galder Zamarreño
        2. Preliminary_fix.patch (7 kB), attached by Galder Zamarreño

              Assignee: Galder Zamarreño (rh-ee-galder)
              Reporter: Galder Zamarreño (rh-ee-galder)
