JGroups / JGRP-616

NullPointerException in Multiplexer.java

    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Affects Version: 2.6
    • Fix Version: 2.6
    • Component: None

      When starting 4 servers, each of which joins 8 different ReplicatedHashMaps over a multiplexed channel, we frequently get this:

      2007-11-03 09:39:48,774 ERROR [STREAMING_STATE_TRANSFER sender,udp,192.168.164.227:33709] log.GeronimoLog (GeronimoLog.java:108) - failed returning the application state, will return null
      java.lang.IllegalArgumentException: State provider 192.168.164.227:33709 does not have service with id space
      at org.jgroups.mux.Multiplexer.handleStateRequest(Multiplexer.java:640)
      at org.jgroups.mux.Multiplexer.up(Multiplexer.java:365)
      at org.jgroups.JChannel.up(JChannel.java:1147)
      at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:341)
      at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:428)
      at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER$StateProviderHandler.process(STREAMING_STATE_TRANSFER.java:731)
      at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER$StateProviderThreadSpawner$1.run(STREAMING_STATE_TRANSFER.java:648)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
      at java.lang.Thread.run(Thread.java:619)

      This is using the following stacks.xml:

      <protocol_stacks>
          <stack name="udp">
              <config>
                  <UDP
                      mcast_addr="${jgroups.udp.mcast_addr}"
                      mcast_port="${jgroups.udp.mcast_port}"
                      tos="8"
                      ucast_recv_buf_size="20000000"
                      ucast_send_buf_size="640000"
                      mcast_recv_buf_size="25000000"
                      mcast_send_buf_size="640000"
                      loopback="false"
                      discard_incompatible_packets="true"
                      max_bundle_size="64000"
                      max_bundle_timeout="30"
                      use_incoming_packet_handler="true"
                      ip_ttl="${jgroups.udp.ip_ttl:32}"
                      enable_bundling="true"
                      enable_diagnostics="false"
                      thread_naming_pattern="cl"
                      use_concurrent_stack="true"
                      thread_pool.enabled="true"
                      thread_pool.min_threads="1"
                      thread_pool.max_threads="25"
                      thread_pool.keep_alive_time="5000"
                      thread_pool.queue_enabled="false"
                      thread_pool.queue_max_size="100"
                      thread_pool.rejection_policy="Run"
                      oob_thread_pool.enabled="true"
                      oob_thread_pool.min_threads="1"
                      oob_thread_pool.max_threads="8"
                      oob_thread_pool.keep_alive_time="5000"
                      oob_thread_pool.queue_enabled="false"
                      oob_thread_pool.queue_max_size="100"
                      oob_thread_pool.rejection_policy="Run"/>
                  <PING timeout="${jgroups.ping.timeout:15000}"
                        num_initial_members="${jgroups.ping.num_initial_members:32}"/>
                  <MERGE2 max_interval="30000"
                          min_interval="10000"/>
                  <FD_SOCK/>
                  <FD timeout="10000" max_tries="5" shun="true"/>
                  <VERIFY_SUSPECT timeout="1500"/>
                  <BARRIER/>
                  <pbcast.NAKACK use_mcast_xmit="false" gc_lag="10"
                                 retransmit_timeout="300,600,1200,2400,4800"
                                 discard_delivered_msgs="true"/>
                  <UNICAST timeout="300,600,1200,2400,3600"/>
                  <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                                 max_bytes="400000"/>
                  <VIEW_SYNC avg_send_interval="60000"/>
                  <pbcast.GMS print_local_addr="false" join_timeout="10000"
                              join_retry_timeout="2000" shun="true"
                              view_bundling="true" view_ack_collection_timeout="5000"/>
                  <FC max_credits="20000000"
                      min_threshold="0.10"/>
                  <FRAG2 frag_size="60000"/>
                  <pbcast.STREAMING_STATE_TRANSFER/>
                  <pbcast.FLUSH timeout="0"/>
              </config>
          </stack>
      </protocol_stacks>
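
      The error occurs while fetching state for one of the multiplexed services ("space" in the log above). A minimal sketch of the kind of setup described — each node creating several ReplicatedHashMaps over one shared "udp" stack — might look like the following. The service ids, cluster name, and state-transfer timeout are hypothetical, and the factory and map signatures assume the JGroups 2.6 multiplexer API; exact method names may differ between 2.x releases.

      ```java
      // Hypothetical reproduction sketch (not from the report itself),
      // assuming the JGroups 2.6 multiplexer API.
      import org.jgroups.Channel;
      import org.jgroups.JChannelFactory;
      import org.jgroups.blocks.ReplicatedHashMap;

      public class MuxStateDemo {
          public static void main(String[] args) throws Exception {
              JChannelFactory factory = new JChannelFactory();
              factory.setMultiplexerConfig("stacks.xml"); // the file shown above

              // Each service id gets its own ReplicatedHashMap, all multiplexed
              // over the same underlying "udp" protocol stack.
              for (int i = 0; i < 8; i++) {
                  Channel ch = factory.createMultiplexerChannel("udp", "service-" + i);
                  ch.connect("demo-cluster");
                  ReplicatedHashMap<String, String> map =
                          new ReplicatedHashMap<String, String>(ch);
                  map.start(10000); // state transfer; where the failure above surfaces
              }
          }
      }
      ```

      Run on four nodes concurrently, the state requests for the individual service ids race with the services registering on the coordinator, which appears to be what triggers the "does not have service with id" failure in Multiplexer.handleStateRequest.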

              Assignee: Vladimir Blagojevic
              Reporter: Robert Newson