Type: Bug
Resolution: Done
Priority: Major
When setting up a federation between two clusters, A and B, on OpenShift, the federation client tries to reach the remote server using the internal service domain name rather than the configured public route. This is probably because the useTopologyForLoadBalancing option is not settable on the federation client. After the initial connection is established through the OpenShift public route, the client loops indefinitely trying to reach the hosts through the internal service names published by the remote cluster.
Attached is a log from an AMQ broker on cluster A, in which we can clearly see the federation client trying to reach the other cluster B through its internal service DNS name.
Starting from line 4461 we can see traces of the DNS name amq-broker-b-ss-0-amq-broker-b-amq-headless-amq-messaging-svc-cluster-local.
We can even see that it is actually trying to use the unsecured acceptor on port 61616 (no SSL) that was used to establish the cluster, instead of the SSL-enabled acceptor that is exposed through the OpenShift route.
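For context, a minimal sketch of the broker.xml federation configuration where one would expect to control this behaviour. All names, the route hostname, the port, and the credentials below are hypothetical; useTopologyForLoadBalancing is a standard Artemis core client URI parameter, and this issue reports that it cannot be applied to federation clients:

```xml
<!-- Sketch only: on cluster A, a connector pointing at cluster B's
     public OpenShift route (the SSL acceptor), not the internal service.
     useTopologyForLoadBalancing=false is what one would want here to stop
     the client from switching to the topology (internal service DNS names)
     advertised by the remote cluster; per this issue it is not honoured
     for federation clients. -->
<connectors>
   <connector name="cluster-b">tcp://broker-b-route.apps.example.com:443?sslEnabled=true&amp;useTopologyForLoadBalancing=false</connector>
</connectors>

<federations>
   <federation name="a-to-b" user="federation-user" password="federation-pass">
      <upstream name="cluster-b">
         <static-connectors>
            <connector-ref>cluster-b</connector-ref>
         </static-connectors>
         <queue-policy name="federate-all">
            <include queue-match="#" address-match="#"/>
         </queue-policy>
      </upstream>
   </federation>
</federations>
```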
Clones: ENTMQBR-4445 "unable to disable useTopologyForLoadBalancing for federation clients" (Closed)