Red Hat Data Grid / JDG-1521

Management Console will not work if there are multiple cache-containers and/or endpoints


Details

    • ER3

      Used a standard clustered.xml (7.1.1)

      Step 1 - add container:
      <cache-container name="clustered2" default-cache="mycache" statistics="true">
          <transport lock-timeout="60000"/>
          <global-state/>
          <distributed-cache name="mycache"/>
      </cache-container>

      Step 2 - add connector and binding:
      <hotrod-connector socket-binding="hotrod" cache-container="clustered">
          <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
      </hotrod-connector>
      <hotrod-connector name="hr2" socket-binding="hotrod2" cache-container="clustered2">
          <topology-state-transfer lazy-retrieval="false" lock-timeout="1000" replication-timeout="5000"/>
      </hotrod-connector>

      <socket-binding name="hotrod2" port="11333"/>

      Step 3 - add a name for the hotrod connector:
      <hotrod-connector name="hr1" socket-binding="hotrod" ....


    Description

      If a second cache container is configured with the necessary endpoint, the management console shows:

      1.) wrong information
      If the endpoint is not configured, or the first (default) endpoint does not use the name attribute,
      -> both containers show the same endpoint configuration, which is wrong

      2.) blank page
      If both endpoints use a name attribute, the console no longer works (see the sketch after this list).
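
      For illustration, minimal connector fragments for the two cases above (a sketch based on the configuration from the steps in this report; the topology-state-transfer child element is omitted for brevity):

      Case 1.) - the first (default) connector has no name attribute; the console shows both containers with the same endpoint configuration:

      <hotrod-connector socket-binding="hotrod" cache-container="clustered"/>
      <hotrod-connector name="hr2" socket-binding="hotrod2" cache-container="clustered2"/>

      Case 2.) - both connectors carry a name attribute; the console renders a blank page:

      <hotrod-connector name="hr1" socket-binding="hotrod" cache-container="clustered"/>
      <hotrod-connector name="hr2" socket-binding="hotrod2" cache-container="clustered2"/>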


            People

              vblagoje Vladimir Blagojevic (Inactive)
              rhn-support-wfink Wolf Fink
              Votes: 0
              Watchers: 3

              Dates

                Created:
                Updated:
                Resolved: