Infinispan / ISPN-1160

fetchInMemoryState doesn't work without the FLUSH protocol for UDP


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: 5.0.0.CR6
    • Affects Version: 5.0.0.CR4
      I have attached a Java file with a main method. If you change the configuration file location to point at the jgroups-udp.xml shipped with the Infinispan distribution, you should see that a new peer gets stuck retrying while attempting to retrieve state from the coordinator.

      I was testing with a replicated cache in Infinispan. While trying 5.0.0.CR4, I found that I cannot use a replicated or invalidation cache (async or sync) with fetchInMemoryState set to true unless the FLUSH protocol is present in the UDP stack. I was able to reproduce this using the distributed jgroups-udp.xml file, which has FLUSH removed. When I tried jgroups-tcp.xml, it worked without the FLUSH protocol, as expected.
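      For reference, the workaround amounts to putting FLUSH back into the UDP stack. A minimal sketch of what the tail of a jgroups-udp.xml protocol list might look like with FLUSH re-added (the protocol names are real JGroups protocols, but the list is abbreviated and the attributes are illustrative, not the exact shipped values):

      ```xml
      <config xmlns="urn:org:jgroups">
          <UDP mcast_port="45588"/>
          <PING/>
          <pbcast.NAKACK use_mcast_xmit="true"/>
          <UNICAST/>
          <pbcast.STABLE/>
          <pbcast.GMS/>
          <FRAG2/>
          <pbcast.STREAMING_STATE_TRANSFER/>
          <!-- Re-adding FLUSH (removed from the shipped jgroups-udp.xml) avoids the
               state-retrieval hang described above -->
          <pbcast.FLUSH timeout="0"/>
      </config>
      ```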

      I have attached a very basic test Java file that uses only Infinispan and reproduces the problem every time I run it. I will also attach the log files from both the coordinator and the joining peer that show this issue.
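      For context, a minimal Infinispan 5.0-style configuration that exercises this code path might look like the following (cluster name and file paths are assumptions; the key settings are fetchInMemoryState="true" and a transport pointed at the shipped jgroups-udp.xml, which lacks FLUSH):

      ```xml
      <infinispan>
         <global>
            <transport clusterName="test-cluster">
               <properties>
                  <!-- Use the UDP stack shipped with the distribution (FLUSH removed) -->
                  <property name="configurationFile" value="jgroups-udp.xml"/>
               </properties>
            </transport>
         </global>
         <default>
            <clustering mode="replication">
               <sync/>
               <!-- The joining node requests in-memory state from the coordinator -->
               <stateRetrieval fetchInMemoryState="true"/>
            </clustering>
         </default>
      </infinispan>
      ```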

        Attachments:
        1. infinispan.test (176 kB)
        2. infinispan2.test (177 kB)
        3. producer.txt (96 kB)
        4. receiver.txt (96 kB)
        5. TestInfinispan.java (2 kB)

              Assignee: vblagoje Vladimir Blagojevic (Inactive)
              Reporter: rpwburns William Burns (Inactive)
              Votes: 0
              Watchers: 4

                Created:
                Updated:
                Resolved: