[JBEAP-24417] [GSS](7.4.z) JBoss throws UnknownHostExceptions and XARecovery fails when Connected to an AMQ Cluster in OpenShift


      Configure JBoss messaging-activemq as a resource adapter targeting the load-balanced, node-port or headless routes for an AMQ cluster hosted in OpenShift:

              <subsystem xmlns="urn:jboss:domain:messaging-activemq:13.0">
                  <remote-connector name="netty-remote-throughput-node1" socket-binding="messaging-remote-throughput-node1">
                      <param name="ssl-enabled" value="true"/>
                  </remote-connector>
                  <remote-connector name="netty-remote-throughput-node2" socket-binding="messaging-remote-throughput-node2">
                      <param name="ssl-enabled" value="true"/>
                  </remote-connector>
                  <pooled-connection-factory name="activemq-ra-remote" entries="java:/RemoteJmsXA java:jboss/RemoteJmsXA java:/jms/DefaultJMSConnectionFactory" connectors="netty-remote-throughput-node1 netty-remote-throughput-node2" ha="false" reconnect-attempts="0" use-topology-for-load-balancing="false" transaction="xa" user="admin" password="admin" >
                      <inbound-config use-jndi="false" rebalance-connections="false" setup-attempts="-1" setup-interval="5000"/>
                  </pooled-connection-factory>
                  <external-jms-queue name="INBOUND_QUEUE" entries="java:/jms/queue/INBOUND_QUEUE"/>
                  <external-jms-queue name="OUTBOUND_QUEUE" entries="java:/jms/queue/OUTBOUND_QUEUE"/>
              </subsystem>
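
      The remote connectors above reference outbound socket bindings that must also be defined in the server's socket binding group. A minimal sketch, assuming hypothetical hostnames and port 443 for the endpoints exposed by the AMQ cluster (substitute the actual load-balanced, node-port or headless addresses):

              <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
                  <!-- existing socket bindings omitted -->
                  <!-- the hosts below are placeholders for the externally exposed AMQ endpoints -->
                  <outbound-socket-binding name="messaging-remote-throughput-node1">
                      <remote-destination host="amq-node1.apps.example.com" port="443"/>
                  </outbound-socket-binding>
                  <outbound-socket-binding name="messaging-remote-throughput-node2">
                      <remote-destination host="amq-node2.apps.example.com" port="443"/>
                  </outbound-socket-binding>
              </socket-binding-group>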
      

      Deploy an AMQ broker cluster on OpenShift and expose its endpoint(s) to external clients.

      Alternative: You can simulate the same failures by configuring the AMQ cluster outside OpenShift, but in a multihomed environment, with one interface used for clustering (node1.internal.test.redhat.com) over plain-text communication and another interface used for client communication over SSL (sample configs attached). Configure /etc/hosts so that the internal interface is mapped on the broker hosts, but not on the host where JBoss is deployed.
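
      The attached sample configs are not reproduced in this description, but the broker-side acceptors for this alternative setup might look roughly like the sketch below (the external hostname, ports and keystore settings are placeholders, not the attached values):

              <!-- broker.xml excerpt: a sketch only, not the attached sample config -->
              <acceptors>
                  <!-- plain-text acceptor on the internal interface, used for clustering;
                       node1.internal.test.redhat.com resolves only on the broker hosts -->
                  <acceptor name="cluster">tcp://node1.internal.test.redhat.com:61617</acceptor>
                  <!-- SSL acceptor on the external interface, used by remote clients such as JBoss EAP -->
                  <acceptor name="netty-ssl">tcp://node1.test.redhat.com:61616?sslEnabled=true;keyStorePath=/path/to/broker.ks;keyStorePassword=changeit</acceptor>
              </acceptors>

      On the broker hosts, /etc/hosts maps node1.internal.test.redhat.com to the internal interface; the host running JBoss EAP has no such entry, so that name cannot be resolved there.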

      Start JBoss EAP and wait some time. You will see UnknownHostExceptions and XARecovery failures as the resource adapter attempts to contact the "internal" broker endpoints.


      When JBoss EAP is running outside OpenShift with its messaging-activemq subsystem configured as a resource adapter connecting to an AMQ cluster inside OpenShift, we see UnknownHostExceptions and XARecovery failures caused by topology updates, even when useTopologyForLoadBalancing is set to false and ha is also set to false.

            Michaela Osmerova added a comment - As all verified issues should change to closed status, closing them with the bulk update.

            Peter Mackay added a comment - Verified with EAP 7.4.15.GA-CR1

            Emmanuel Hugonnet added a comment - The fix is not in Artemis/AMQ but in artemis-wildfly-integration. I'm going to release 1.0.8 with the correct fix.

            Tomas Hofman added a comment (edited) - ENTMQBR-7541 is not an upstream issue for this (should be closed).

            Emmanuel Hugonnet added a comment - https://github.com/rh-messaging/artemis-wildfly-integration/commit/5f5ec7a3e3c9b3993326bc07398d1799f66ac329 and https://github.com/rh-messaging/artemis-wildfly-integration/commit/172cd1f2e994dd93c35406a71fb3eb7c943ea613

            Emmanuel Hugonnet added a comment - The issue is that we would use the topology to get the backup address, which is wrong behind a proxy or load balancer. WFLY18212 is somewhat related, as we didn't update the recovery options on topology change (such as a backup server starting after the initial setup, or the live server restarting after a failover).

            Bartosz Baranowski added a comment - rhn-support-dhawkins - I'd ask rhn-engineering-rhusar or pferraro@redhat.com, as AFAIR they dove into this kind of issue on OC at some point.
